Test Report: Hyperkit_macOS 19711

f2dddbc2cec1d99a0bb3d71de73f46a47f499a62:2024-09-26:36389

Tests failed (20/217)

TestOffline (195.51s)
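The === RUN / === PAUSE / === CONT markers in the captured output below are standard `go test -v` lifecycle lines: TestOffline calls t.Parallel(), so the runner pauses it after registration and resumes it once the sequential tests have finished. A minimal sketch of the pattern (illustrative Go only, not the actual minikube test source):

	package integration

	import "testing"

	// A parallel test: with -v the runner prints === RUN, then
	// === PAUSE at the t.Parallel() call, then === CONT when the
	// test is resumed and its body actually executes.
	func TestExampleParallel(t *testing.T) {
		t.Parallel()
		// test body runs after the === CONT line is printed
	}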
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-713000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-713000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : exit status 80 (3m10.10909639s)
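To replay this failure outside CI, re-run the failing command from the log verbatim, or select just this test with a standard `go test` filter. A minimal sketch, assuming a minikube checkout with the integration tests under test/integration and a prebuilt out/minikube-darwin-amd64 (the CI harness may pass extra flags not shown here):

	# run only TestOffline (generic go test selector)
	go test ./test/integration -run '^TestOffline$' -timeout 60m

	# or replay the exact command the test executed
	out/minikube-darwin-amd64 start -p offline-docker-713000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit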

-- stdout --
	* [offline-docker-713000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "offline-docker-713000" primary control-plane node in "offline-docker-713000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "offline-docker-713000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...

-- /stdout --
** stderr ** 
	I0926 18:25:45.994373    5953 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:25:45.994690    5953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:25:45.994697    5953 out.go:358] Setting ErrFile to fd 2...
	I0926 18:25:45.994702    5953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:25:45.995028    5953 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 18:25:45.996901    5953 out.go:352] Setting JSON to false
	I0926 18:25:46.025745    5953 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5116,"bootTime":1727395230,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0926 18:25:46.025897    5953 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:25:46.088931    5953 out.go:177] * [offline-docker-713000] minikube v1.34.0 on Darwin 14.6.1
	I0926 18:25:46.129976    5953 notify.go:220] Checking for updates...
	I0926 18:25:46.153891    5953 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:25:46.238823    5953 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 18:25:46.260777    5953 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0926 18:25:46.282880    5953 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:25:46.307104    5953 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 18:25:46.328676    5953 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:25:46.350022    5953 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:25:46.378836    5953 out.go:177] * Using the hyperkit driver based on user configuration
	I0926 18:25:46.421069    5953 start.go:297] selected driver: hyperkit
	I0926 18:25:46.421096    5953 start.go:901] validating driver "hyperkit" against <nil>
	I0926 18:25:46.421115    5953 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:25:46.426161    5953 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:25:46.426332    5953 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19711-1128/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0926 18:25:46.434724    5953 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0926 18:25:46.438384    5953 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:25:46.438402    5953 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0926 18:25:46.438435    5953 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 18:25:46.438675    5953 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:25:46.438711    5953 cni.go:84] Creating CNI manager for ""
	I0926 18:25:46.438746    5953 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:25:46.438752    5953 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 18:25:46.438824    5953 start.go:340] cluster config:
	{Name:offline-docker-713000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-713000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:25:46.438905    5953 iso.go:125] acquiring lock: {Name:mka8a9c5a237c1e4ae233281d2ff7965d13a843d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:25:46.505738    5953 out.go:177] * Starting "offline-docker-713000" primary control-plane node in "offline-docker-713000" cluster
	I0926 18:25:46.563941    5953 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:25:46.564012    5953 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0926 18:25:46.564037    5953 cache.go:56] Caching tarball of preloaded images
	I0926 18:25:46.564262    5953 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 18:25:46.564280    5953 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:25:46.564821    5953 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/offline-docker-713000/config.json ...
	I0926 18:25:46.564872    5953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/offline-docker-713000/config.json: {Name:mkf0917d28aa9d382248e320800a1c722e73cbe4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:25:46.585691    5953 start.go:360] acquireMachinesLock for offline-docker-713000: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:25:46.585817    5953 start.go:364] duration metric: took 94.85µs to acquireMachinesLock for "offline-docker-713000"
	I0926 18:25:46.585860    5953 start.go:93] Provisioning new machine with config: &{Name:offline-docker-713000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-713000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:25:46.585930    5953 start.go:125] createHost starting for "" (driver="hyperkit")
	I0926 18:25:46.608727    5953 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0926 18:25:46.608879    5953 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:25:46.608923    5953 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:25:46.617782    5953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53786
	I0926 18:25:46.618129    5953 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:25:46.618540    5953 main.go:141] libmachine: Using API Version  1
	I0926 18:25:46.618552    5953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:25:46.618772    5953 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:25:46.618891    5953 main.go:141] libmachine: (offline-docker-713000) Calling .GetMachineName
	I0926 18:25:46.619039    5953 main.go:141] libmachine: (offline-docker-713000) Calling .DriverName
	I0926 18:25:46.619157    5953 start.go:159] libmachine.API.Create for "offline-docker-713000" (driver="hyperkit")
	I0926 18:25:46.619182    5953 client.go:168] LocalClient.Create starting
	I0926 18:25:46.619218    5953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem
	I0926 18:25:46.619274    5953 main.go:141] libmachine: Decoding PEM data...
	I0926 18:25:46.619289    5953 main.go:141] libmachine: Parsing certificate...
	I0926 18:25:46.619377    5953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem
	I0926 18:25:46.619415    5953 main.go:141] libmachine: Decoding PEM data...
	I0926 18:25:46.619427    5953 main.go:141] libmachine: Parsing certificate...
	I0926 18:25:46.619441    5953 main.go:141] libmachine: Running pre-create checks...
	I0926 18:25:46.619449    5953 main.go:141] libmachine: (offline-docker-713000) Calling .PreCreateCheck
	I0926 18:25:46.619523    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:25:46.619671    5953 main.go:141] libmachine: (offline-docker-713000) Calling .GetConfigRaw
	I0926 18:25:46.620262    5953 main.go:141] libmachine: Creating machine...
	I0926 18:25:46.620281    5953 main.go:141] libmachine: (offline-docker-713000) Calling .Create
	I0926 18:25:46.620362    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:25:46.620500    5953 main.go:141] libmachine: (offline-docker-713000) DBG | I0926 18:25:46.620353    5976 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 18:25:46.620556    5953 main.go:141] libmachine: (offline-docker-713000) Downloading /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1128/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0926 18:25:47.109581    5953 main.go:141] libmachine: (offline-docker-713000) DBG | I0926 18:25:47.109507    5976 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/id_rsa...
	I0926 18:25:47.204689    5953 main.go:141] libmachine: (offline-docker-713000) DBG | I0926 18:25:47.204575    5976 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/offline-docker-713000.rawdisk...
	I0926 18:25:47.204716    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Writing magic tar header
	I0926 18:25:47.204755    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Writing SSH key tar header
	I0926 18:25:47.225851    5953 main.go:141] libmachine: (offline-docker-713000) DBG | I0926 18:25:47.225713    5976 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000 ...
	I0926 18:25:47.613008    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:25:47.613025    5953 main.go:141] libmachine: (offline-docker-713000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/hyperkit.pid
	I0926 18:25:47.613048    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Using UUID f60320be-bf0b-48da-b3ca-e89f303e67ca
	I0926 18:25:47.922299    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Generated MAC 66:ff:f5:b4:0:b6
	I0926 18:25:47.922328    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-713000
	I0926 18:25:47.922393    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:47 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f60320be-bf0b-48da-b3ca-e89f303e67ca", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b01e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:25:47.922431    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:47 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f60320be-bf0b-48da-b3ca-e89f303e67ca", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b01e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:25:47.922524    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:47 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f60320be-bf0b-48da-b3ca-e89f303e67ca", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/offline-docker-713000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-713000"}
	I0926 18:25:47.922570    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:47 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f60320be-bf0b-48da-b3ca-e89f303e67ca -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/offline-docker-713000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-713000"
	I0926 18:25:47.922591    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:47 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 18:25:47.926281    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:47 DEBUG: hyperkit: Pid is 6000
	I0926 18:25:47.926744    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 0
	I0926 18:25:47.926761    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:25:47.926874    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:25:47.927872    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:25:47.927915    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:25:47.927948    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:25:47.927974    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:25:47.927989    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:25:47.928005    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:25:47.928020    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:25:47.928036    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:25:47.928067    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:25:47.928094    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:25:47.928122    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:25:47.928137    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:25:47.928145    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:25:47.928154    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:25:47.928162    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:25:47.928169    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:25:47.928184    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:25:47.928198    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:25:47.928223    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:25:47.928239    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:25:47.934105    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:47 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 18:25:47.987387    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:47 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 18:25:48.007332    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:25:48.007356    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:25:48.007366    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:25:48.007373    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:25:48.383674    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:48 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 18:25:48.383690    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:48 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 18:25:48.499238    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:25:48.499268    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:25:48.499279    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:25:48.499289    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:25:48.500131    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:48 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 18:25:48.500139    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:48 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 18:25:49.929851    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 1
	I0926 18:25:49.929865    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:25:49.929949    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:25:49.930711    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:25:49.930771    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:25:49.930785    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:25:49.930797    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:25:49.930805    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:25:49.930811    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:25:49.930819    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:25:49.930833    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:25:49.930848    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:25:49.930857    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:25:49.930866    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:25:49.930873    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:25:49.930880    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:25:49.930888    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:25:49.930896    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:25:49.930903    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:25:49.930911    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:25:49.930921    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:25:49.930931    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:25:49.930938    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:25:51.931719    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 2
	I0926 18:25:51.931735    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:25:51.931816    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:25:51.932585    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:25:51.932644    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:25:51.932655    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:25:51.932664    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:25:51.932671    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:25:51.932692    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:25:51.932698    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:25:51.932712    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:25:51.932719    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:25:51.932726    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:25:51.932732    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:25:51.932754    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:25:51.932767    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:25:51.932785    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:25:51.932805    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:25:51.932812    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:25:51.932819    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:25:51.932824    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:25:51.932831    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:25:51.932838    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:25:53.899660    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:53 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0926 18:25:53.899864    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:53 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0926 18:25:53.899876    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:53 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0926 18:25:53.919398    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:25:53 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0926 18:25:53.934723    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 3
	I0926 18:25:53.934738    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:25:53.934816    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:25:53.935900    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:25:53.935972    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:25:53.936001    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:25:53.936010    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:25:53.936023    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:25:53.936032    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:25:53.936041    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:25:53.936049    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:25:53.936061    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:25:53.936069    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:25:53.936079    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:25:53.936096    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:25:53.936117    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:25:53.936170    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:25:53.936188    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:25:53.936216    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:25:53.936225    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:25:53.936237    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:25:53.936249    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:25:53.936262    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:25:55.938176    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 4
	I0926 18:25:55.938192    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:25:55.938282    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:25:55.939078    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:25:55.939119    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:25:55.939132    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:25:55.939140    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:25:55.939148    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:25:55.939159    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:25:55.939167    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:25:55.939174    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:25:55.939181    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:25:55.939192    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:25:55.939201    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:25:55.939208    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:25:55.939213    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:25:55.939219    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:25:55.939227    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:25:55.939234    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:25:55.939241    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:25:55.939258    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:25:55.939269    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:25:55.939278    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:25:57.941343    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 5
	I0926 18:25:57.941361    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:25:57.941411    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:25:57.942219    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:25:57.942265    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:25:57.942277    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:25:57.942287    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:25:57.942299    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:25:57.942310    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:25:57.942327    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:25:57.942336    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:25:57.942346    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:25:57.942364    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:25:57.942376    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:25:57.942395    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:25:57.942406    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:25:57.942432    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:25:57.942448    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:25:57.942458    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:25:57.942471    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:25:57.942481    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:25:57.942491    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:25:57.942504    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:25:59.942510    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 6
	I0926 18:25:59.942523    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:25:59.942582    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:25:59.943370    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:25:59.943419    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:25:59.943433    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:25:59.943442    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:25:59.943449    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:25:59.943455    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:25:59.943475    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:25:59.943481    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:25:59.943487    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:25:59.943493    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:25:59.943499    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:25:59.943505    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:25:59.943511    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:25:59.943519    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:25:59.943526    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:25:59.943534    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:25:59.943543    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:25:59.943551    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:25:59.943564    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:25:59.943576    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:01.944235    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 7
	I0926 18:26:01.944251    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:01.944346    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:01.945143    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:01.945197    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:01.945205    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:01.945216    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:01.945223    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:01.945229    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:01.945235    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:01.945268    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:01.945278    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:01.945287    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:01.945293    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:01.945309    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:01.945316    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:01.945325    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:01.945335    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:01.945344    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:01.945352    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:01.945359    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:01.945367    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:01.945375    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:03.945832    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 8
	I0926 18:26:03.945844    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:03.945927    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:03.946891    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:03.946946    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:03.946956    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:03.946965    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:03.946971    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:03.946977    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:03.946985    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:03.946992    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:03.946999    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:03.947004    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:03.947010    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:03.947033    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:03.947056    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:03.947070    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:03.947081    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:03.947089    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:03.947096    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:03.947104    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:03.947111    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:03.947129    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:05.948118    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 9
	I0926 18:26:05.948134    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:05.948217    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:05.948994    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:05.949043    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:05.949051    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:05.949062    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:05.949073    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:05.949084    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:05.949090    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:05.949096    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:05.949102    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:05.949116    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:05.949127    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:05.949141    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:05.949149    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:05.949156    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:05.949163    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:05.949169    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:05.949174    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:05.949180    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:05.949186    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:05.949193    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:07.950501    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 10
	I0926 18:26:07.950515    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:07.950526    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:07.951483    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:07.951531    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:07.951543    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:07.951550    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:07.951555    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:07.951606    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:07.951618    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:07.951625    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:07.951632    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:07.951645    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:07.951660    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:07.951682    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:07.951692    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:07.951700    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:07.951705    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:07.951713    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:07.951720    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:07.951737    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:07.951745    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:07.951752    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:09.952066    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 11
	I0926 18:26:09.952082    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:09.952156    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:09.952951    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:09.953004    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:09.953014    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:09.953023    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:09.953029    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:09.953046    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:09.953059    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:09.953066    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:09.953072    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:09.953095    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:09.953105    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:09.953113    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:09.953120    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:09.953132    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:09.953149    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:09.953158    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:09.953171    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:09.953178    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:09.953186    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:09.953193    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:11.954029    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 12
	I0926 18:26:11.954043    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:11.954132    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:11.955081    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:11.955146    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:11.955155    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:11.955163    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:11.955169    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:11.955178    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:11.955185    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:11.955200    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:11.955214    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:11.955221    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:11.955226    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:11.955246    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:11.955255    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:11.955263    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:11.955270    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:11.955277    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:11.955292    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:11.955299    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:11.955306    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:11.955316    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:13.957205    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 13
	I0926 18:26:13.957216    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:13.957299    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:13.958141    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:13.958154    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:13.958162    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:13.958197    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:13.958208    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:13.958217    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:13.958226    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:13.958232    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:13.958240    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:13.958254    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:13.958262    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:13.958272    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:13.958278    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:13.958284    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:13.958292    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:13.958298    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:13.958306    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:13.958313    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:13.958326    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:13.958337    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:15.958828    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 14
	I0926 18:26:15.958841    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:15.958906    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:15.959698    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:15.959741    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:15.959751    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:15.959760    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:15.959767    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:15.959773    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:15.959778    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:15.959785    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:15.959792    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:15.959798    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:15.959803    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:15.959811    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:15.959827    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:15.959837    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:15.959845    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:15.959853    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:15.959861    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:15.959869    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:15.959882    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:15.959890    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:17.960660    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 15
	I0926 18:26:17.960676    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:17.960725    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:17.961539    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:17.961602    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:17.961622    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:17.961631    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:17.961637    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:17.961644    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:17.961652    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:17.961659    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:17.961670    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:17.961681    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:17.961689    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:17.961695    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:17.961703    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:17.961710    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:17.961718    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:17.961724    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:17.961730    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:17.961737    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:17.961744    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:17.961751    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:19.962514    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 16
	I0926 18:26:19.962537    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:19.962628    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:19.963388    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:19.963446    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:19.963457    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:19.963464    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:19.963472    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:19.963487    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:19.963495    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:19.963502    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:19.963507    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:19.963514    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:19.963522    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:19.963528    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:19.963535    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:19.963542    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:19.963547    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:19.963557    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:19.963574    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:19.963590    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:19.963603    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:19.963613    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:21.964029    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 17
	I0926 18:26:21.964043    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:21.964099    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:21.964882    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:21.965002    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:21.965016    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:21.965024    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:21.965030    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:21.965036    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:21.965057    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:21.965074    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:21.965083    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:21.965089    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:21.965095    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:21.965101    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:21.965108    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:21.965124    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:21.965137    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:21.965152    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:21.965160    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:21.965167    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:21.965175    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:21.965183    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:23.965792    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 18
	I0926 18:26:23.965805    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:23.965926    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:23.966706    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:23.966744    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:23.966753    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:23.966769    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:23.966778    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:23.966797    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:23.966813    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:23.966820    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:23.966835    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:23.966845    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:23.966858    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:23.966868    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:23.966880    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:23.966885    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:23.966892    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:23.966903    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:23.966911    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:23.966920    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:23.966926    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:23.966932    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:25.968352    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 19
	I0926 18:26:25.968363    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:25.968441    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:25.969226    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:25.969281    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:25.969289    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:25.969297    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:25.969302    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:25.969316    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:25.969326    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:25.969332    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:25.969346    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:25.969357    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:25.969366    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:25.969375    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:25.969382    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:25.969396    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:25.969411    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:25.969418    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:25.969423    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:25.969434    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:25.969446    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:25.969454    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:27.970492    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 20
	I0926 18:26:27.970505    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:27.970567    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:27.971382    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:27.971438    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:27.971450    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:27.971465    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:27.971471    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:27.971477    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:27.971484    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:27.971492    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:27.971500    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:27.971505    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:27.971523    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:27.971530    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:27.971537    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:27.971545    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:27.971559    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:27.971571    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:27.971578    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:27.971586    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:27.971593    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:27.971600    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
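
Each numbered attempt above is one full re-read of /var/db/dhcpd_leases, comparing every entry's hardware address against the VM's generated MAC (66:ff:f5:b4:0:b6 here). A minimal Go sketch of that scan, assuming the lease-file layout implied by the logged fields (name/ip_address/hw_address per entry); leaseIPForMAC and the exact field handling are hypothetical illustrations, not the driver's actual code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// leaseIPForMAC scans a dhcpd_leases-style file and returns the IP bound to
// the given hardware address, or "" if no entry matches.
func leaseIPForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string // most recent ip_address= line in the current entry
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address=1,66:ff:f5:b4:0:b6 -- strip the leading "1," type tag
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:]
			}
			if strings.EqualFold(hw, mac) {
				return ip, nil
			}
		}
	}
	return "", sc.Err()
}

func main() {
	ip, err := leaseIPForMAC("/var/db/dhcpd_leases", "66:ff:f5:b4:0:b6")
	fmt.Println(ip, err)
}
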
	I0926 18:26:29.973375    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 21
	I0926 18:26:29.973390    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:29.973498    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:29.974594    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:29.974653    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:29.974679    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:29.974695    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:29.974702    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:29.974726    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:29.974735    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:29.974744    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:29.974752    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:29.974758    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:29.974766    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:29.974773    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:29.974778    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:29.974792    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:29.974800    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:29.974807    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:29.974813    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:29.974824    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:29.974835    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:29.974845    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:31.976113    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 22
	I0926 18:26:31.976129    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:31.976196    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:31.977151    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:31.977195    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:31.977203    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:31.977228    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:31.977256    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:31.977269    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:31.977289    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:31.977295    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:31.977324    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:31.977329    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:31.977344    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:31.977357    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:31.977364    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:31.977370    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:31.977381    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:31.977389    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:31.977395    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:31.977401    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:31.977407    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:31.977415    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:33.979357    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 23
	I0926 18:26:33.979371    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:33.979427    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:33.980203    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:33.980219    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:33.980239    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:33.980250    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:33.980258    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:33.980283    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:33.980295    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:33.980304    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:33.980312    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:33.980319    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:33.980327    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:33.980333    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:33.980339    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:33.980354    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:33.980364    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:33.980381    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:33.980392    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:33.980408    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:33.980414    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:33.980421    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:35.982432    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 24
	I0926 18:26:35.982447    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:35.982517    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:35.983292    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:35.983339    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:35.983347    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:35.983365    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:35.983395    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:35.983405    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:35.983414    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:35.983431    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:35.983442    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:35.983450    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:35.983455    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:35.983470    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:35.983488    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:35.983498    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:35.983506    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:35.983513    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:35.983518    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:35.983524    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:35.983530    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:35.983536    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:37.984993    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 25
	I0926 18:26:37.985009    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:37.985084    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:37.985872    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:37.985938    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:37.985950    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:37.985971    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:37.985980    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:37.985992    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:37.986001    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:37.986016    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:37.986024    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:37.986038    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:37.986046    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:37.986061    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:37.986075    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:37.986083    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:37.986091    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:37.986098    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:37.986105    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:37.986114    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:37.986133    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:37.986146    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:39.987799    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 26
	I0926 18:26:39.987812    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:39.987893    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:39.988716    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:39.988766    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:39.988775    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:39.988786    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:39.988795    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:39.988804    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:39.988820    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:39.988834    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:39.988848    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:39.988863    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:39.988871    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:39.988880    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:39.988885    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:39.988892    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:39.988897    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:39.988908    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:39.988932    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:39.988943    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:39.988954    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:39.988964    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:41.990963    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 27
	I0926 18:26:41.990977    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:41.991011    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:41.991815    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:41.991860    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:41.991871    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:41.991880    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:41.991888    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:41.991903    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:41.991912    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:41.991920    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:41.991930    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:41.991941    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:41.991950    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:41.991979    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:41.991990    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:41.992005    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:41.992014    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:41.992020    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:41.992027    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:41.992034    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:41.992041    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:41.992050    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:43.992841    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 28
	I0926 18:26:43.992853    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:43.992950    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:43.993732    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:43.993786    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:43.993794    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:43.993813    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:43.993837    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:43.993849    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:43.993858    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:43.993867    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:43.993874    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:43.993879    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:43.993894    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:43.993905    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:43.993913    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:43.993921    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:43.993935    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:43.993944    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:43.993951    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:43.993959    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:43.993970    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:43.993978    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:45.994311    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 29
	I0926 18:26:45.994326    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:45.994391    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:45.995349    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for 66:ff:f5:b4:0:b6 in /var/db/dhcpd_leases ...
	I0926 18:26:45.995395    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:45.995402    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:45.995411    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:45.995420    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:45.995428    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:45.995440    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:45.995453    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:45.995462    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:45.995470    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:45.995478    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:45.995502    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:45.995515    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:45.995522    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:45.995530    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:45.995536    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:45.995543    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:45.995568    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:45.995583    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:45.995606    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
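
The attempt counter and the roughly two-second spacing of the timestamps above outline the polling loop that eventually gives up. A compact sketch of that pattern, reusing the hypothetical leaseIPForMAC helper from the earlier sketch (the attempt count and delay are inferred from the log, not taken from the driver's source):

import (
	"fmt"
	"time"
)

// waitForIP polls the lease file until the MAC shows up or attempts run out.
func waitForIP(mac string, attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip, err := leaseIPForMAC("/var/db/dhcpd_leases", mac); err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(2 * time.Second) // matches the ~2s gap between logged attempts
	}
	return "", fmt.Errorf("could not find an IP address for %s", mac)
}
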
	I0926 18:26:47.997649    5953 client.go:171] duration metric: took 1m1.377901054s to LocalClient.Create
	I0926 18:26:49.999800    5953 start.go:128] duration metric: took 1m3.413281752s to createHost
	I0926 18:26:49.999817    5953 start.go:83] releasing machines lock for "offline-docker-713000", held for 1m3.413415333s
	W0926 18:26:49.999846    5953 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 66:ff:f5:b4:0:b6
	I0926 18:26:50.000170    5953 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:26:50.000200    5953 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:26:50.008977    5953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53822
	I0926 18:26:50.009318    5953 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:26:50.009687    5953 main.go:141] libmachine: Using API Version  1
	I0926 18:26:50.009702    5953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:26:50.009903    5953 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:26:50.010265    5953 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:26:50.010288    5953 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:26:50.018849    5953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53824
	I0926 18:26:50.019203    5953 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:26:50.019547    5953 main.go:141] libmachine: Using API Version  1
	I0926 18:26:50.019557    5953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:26:50.019816    5953 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:26:50.019947    5953 main.go:141] libmachine: (offline-docker-713000) Calling .GetState
	I0926 18:26:50.020039    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:50.020117    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:50.021107    5953 main.go:141] libmachine: (offline-docker-713000) Calling .DriverName
	I0926 18:26:50.042313    5953 out.go:177] * Deleting "offline-docker-713000" in hyperkit ...
	I0926 18:26:50.063181    5953 main.go:141] libmachine: (offline-docker-713000) Calling .Remove
	I0926 18:26:50.063332    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:50.063347    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:50.063404    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:50.064374    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:50.064444    5953 main.go:141] libmachine: (offline-docker-713000) DBG | waiting for graceful shutdown
	I0926 18:26:51.066599    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:51.066623    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:51.067592    5953 main.go:141] libmachine: (offline-docker-713000) DBG | waiting for graceful shutdown
	I0926 18:26:52.068491    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:52.068594    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:52.070358    5953 main.go:141] libmachine: (offline-docker-713000) DBG | waiting for graceful shutdown
	I0926 18:26:53.072255    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:53.072328    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:53.073084    5953 main.go:141] libmachine: (offline-docker-713000) DBG | waiting for graceful shutdown
	I0926 18:26:54.073812    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:54.073891    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:54.074573    5953 main.go:141] libmachine: (offline-docker-713000) DBG | waiting for graceful shutdown
	I0926 18:26:55.076704    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:55.076792    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6000
	I0926 18:26:55.077807    5953 main.go:141] libmachine: (offline-docker-713000) DBG | sending sigkill
	I0926 18:26:55.077816    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:55.089842    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:26:55 WARN : hyperkit: failed to read stdout: EOF
	I0926 18:26:55.089859    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:26:55 WARN : hyperkit: failed to read stderr: EOF
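
The Remove path above first waits for a graceful hyperkit exit, re-checking the pid about once per second, then falls back to SIGKILL when the grace period lapses; the stdout/stderr EOF warnings are apparently the plugin noticing the killed process's pipes closing. A sketch of that wait-then-kill pattern; stopVM and its parameters are hypothetical:

import (
	"os"
	"syscall"
	"time"
)

// stopVM waits up to graceWait for the process to exit on its own, then kills it.
func stopVM(pid int, graceWait time.Duration) error {
	proc, err := os.FindProcess(pid) // always succeeds on Unix
	if err != nil {
		return err
	}
	deadline := time.Now().Add(graceWait)
	for time.Now().Before(deadline) {
		if err := proc.Signal(syscall.Signal(0)); err != nil {
			return nil // signal 0 failed: process already exited
		}
		time.Sleep(time.Second) // "waiting for graceful shutdown"
	}
	return proc.Signal(syscall.SIGKILL) // "sending sigkill"
}
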
	W0926 18:26:55.108235    5953 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 66:ff:f5:b4:0:b6
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 66:ff:f5:b4:0:b6
	I0926 18:26:55.108255    5953 start.go:729] Will try again in 5 seconds ...
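
After the first StartHost fails, the half-built VM is torn down and the entire host-creation path runs once more after a fixed delay, exactly as logged here. The shape of that control flow, with hypothetical create/remove callbacks:

import "time"

// startWithRetry mirrors the logged flow: create, and on failure delete the
// stale VM, wait five seconds, then attempt the full create a second time.
func startWithRetry(create func() error, remove func() error) error {
	if err := create(); err != nil {
		_ = remove()                // "* Deleting ... in hyperkit ..."
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		return create()
	}
	return nil
}
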
	I0926 18:27:00.110402    5953 start.go:360] acquireMachinesLock for offline-docker-713000: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:27:52.819425    5953 start.go:364] duration metric: took 52.70851654s to acquireMachinesLock for "offline-docker-713000"
	I0926 18:27:52.819467    5953 start.go:93] Provisioning new machine with config: &{Name:offline-docker-713000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-713000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:27:52.819518    5953 start.go:125] createHost starting for "" (driver="hyperkit")
	I0926 18:27:52.840815    5953 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0926 18:27:52.840921    5953 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:27:52.840942    5953 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:27:52.849672    5953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53832
	I0926 18:27:52.850158    5953 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:27:52.850648    5953 main.go:141] libmachine: Using API Version  1
	I0926 18:27:52.850661    5953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:27:52.850991    5953 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:27:52.851130    5953 main.go:141] libmachine: (offline-docker-713000) Calling .GetMachineName
	I0926 18:27:52.851234    5953 main.go:141] libmachine: (offline-docker-713000) Calling .DriverName
	I0926 18:27:52.851368    5953 start.go:159] libmachine.API.Create for "offline-docker-713000" (driver="hyperkit")
	I0926 18:27:52.851385    5953 client.go:168] LocalClient.Create starting
	I0926 18:27:52.851408    5953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem
	I0926 18:27:52.851457    5953 main.go:141] libmachine: Decoding PEM data...
	I0926 18:27:52.851467    5953 main.go:141] libmachine: Parsing certificate...
	I0926 18:27:52.851506    5953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem
	I0926 18:27:52.851543    5953 main.go:141] libmachine: Decoding PEM data...
	I0926 18:27:52.851555    5953 main.go:141] libmachine: Parsing certificate...
	I0926 18:27:52.851566    5953 main.go:141] libmachine: Running pre-create checks...
	I0926 18:27:52.851571    5953 main.go:141] libmachine: (offline-docker-713000) Calling .PreCreateCheck
	I0926 18:27:52.851648    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:52.851680    5953 main.go:141] libmachine: (offline-docker-713000) Calling .GetConfigRaw
	I0926 18:27:52.883637    5953 main.go:141] libmachine: Creating machine...
	I0926 18:27:52.883658    5953 main.go:141] libmachine: (offline-docker-713000) Calling .Create
	I0926 18:27:52.883741    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:52.883858    5953 main.go:141] libmachine: (offline-docker-713000) DBG | I0926 18:27:52.883732    6156 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 18:27:52.883905    5953 main.go:141] libmachine: (offline-docker-713000) Downloading /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1128/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0926 18:27:53.083022    5953 main.go:141] libmachine: (offline-docker-713000) DBG | I0926 18:27:53.082911    6156 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/id_rsa...
	I0926 18:27:53.361886    5953 main.go:141] libmachine: (offline-docker-713000) DBG | I0926 18:27:53.361802    6156 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/offline-docker-713000.rawdisk...
	I0926 18:27:53.361898    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Writing magic tar header
	I0926 18:27:53.361907    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Writing SSH key tar header
	I0926 18:27:53.362495    5953 main.go:141] libmachine: (offline-docker-713000) DBG | I0926 18:27:53.362452    6156 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000 ...
	I0926 18:27:53.725937    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:53.725957    5953 main.go:141] libmachine: (offline-docker-713000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/hyperkit.pid
	I0926 18:27:53.726001    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Using UUID 966ed6f2-5b26-471c-a590-167879ce8a2a
	I0926 18:27:53.751790    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Generated MAC de:52:80:db:e4:3
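
The retry generates a fresh random MAC for the new VM (de:52:80:db:e4:3 here; note the unpadded hex octets in the logged form). Illustrative only, since the driver's actual generator isn't shown in this log: a random unicast, locally-administered MAC can be produced like this:

import (
	"crypto/rand"
	"fmt"
)

// randomMAC returns a random locally-administered, unicast MAC address,
// printed with unpadded hex octets like the addresses in the log.
func randomMAC() (string, error) {
	b := make([]byte, 6)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	b[0] = (b[0] | 0x02) &^ 0x01 // set locally-administered bit, clear multicast bit
	return fmt.Sprintf("%x:%x:%x:%x:%x:%x", b[0], b[1], b[2], b[3], b[4], b[5]), nil
}
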
	I0926 18:27:53.751808    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-713000
	I0926 18:27:53.751837    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:53 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"966ed6f2-5b26-471c-a590-167879ce8a2a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:27:53.751863    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:53 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"966ed6f2-5b26-471c-a590-167879ce8a2a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:27:53.751899    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:53 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "966ed6f2-5b26-471c-a590-167879ce8a2a", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/offline-docker-713000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-713000"}
	I0926 18:27:53.751950    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:53 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 966ed6f2-5b26-471c-a590-167879ce8a2a -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/offline-docker-713000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-713000"
	I0926 18:27:53.751963    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:53 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 18:27:53.755025    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:53 DEBUG: hyperkit: Pid is 6157
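
The Start/check dumps and the CmdLine above are the entire launch recipe: the driver execs /usr/local/bin/hyperkit with per-machine state paths, hyperkit writes its pid to hyperkit.pid, and the driver re-reads that pid on every subsequent attempt (the "Pid is 6157" / "hyperkit pid from json" lines). A minimal Go sketch of an equivalent launch follows; the state directory is a hypothetical stand-in for the .minikube machine dir, the kernel cmdline is trimmed for brevity, and this is an illustration rather than the docker-machine-driver-hyperkit source. Note the uid=0 in the log: vmnet-backed networking requires root.

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		state := "/tmp/offline-docker-713000" // hypothetical stand-in for the .minikube machine dir
		cmd := exec.Command("/usr/local/bin/hyperkit",
			"-A", "-u",
			"-F", state+"/hyperkit.pid", // hyperkit records its pid here
			"-c", "2", "-m", "2048M", // CPUs=2, Memory=2048MB, matching the VM config above
			"-s", "0:0,hostbridge", "-s", "31,lpc",
			"-s", "1:0,virtio-net", // vmnet NIC; macOS assigns the guest MAC seen in the log
			"-U", "966ed6f2-5b26-471c-a590-167879ce8a2a",
			"-s", "2:0,virtio-blk,"+state+"/offline-docker-713000.rawdisk",
			"-s", "3,ahci-cd,"+state+"/boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-f", "kexec,"+state+"/bzimage,"+state+"/initrd,console=ttyS0", // kernel args trimmed
		)
		if err := cmd.Start(); err != nil { // Start, not Run: the VM process stays up after we return
			log.Fatal(err)
		}
		log.Printf("hyperkit pid: %d", cmd.Process.Pid) // compare "Pid is 6157" above
	}
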
	I0926 18:27:53.755494    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 0
	I0926 18:27:53.755507    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:53.755585    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6157
	I0926 18:27:53.756521    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for de:52:80:db:e4:3 in /var/db/dhcpd_leases ...
	I0926 18:27:53.756582    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:53.756593    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:53.756602    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:53.756610    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:53.756617    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:53.756623    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:53.756629    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:53.756636    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:53.756644    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:53.756653    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:53.756675    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:53.756693    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:53.756707    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:53.756721    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:53.756733    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:53.756744    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:53.756757    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:53.756770    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:53.756808    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
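
Each attempt is a scan of macOS's vmnet DHCP lease database. One detail worth noting: the generated MAC is logged as de:52:80:db:e4:3 because /var/db/dhcpd_leases stores hex octets without zero padding (e4:3, not e4:03), so a matcher has to compare against an identically normalized MAC. A simplified Go sketch of such a scan follows; field names mirror the lease entries visible above, but this is not the driver's actual parser.

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findLeaseIP returns the ip_address of the dhcpd_leases entry whose
	// hw_address matches mac, or an error if no entry matches.
	func findLeaseIP(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=") // remember until we reach hw_address
			case strings.HasPrefix(line, "hw_address="):
				// e.g. "hw_address=1,de:52:80:db:e4:3" -- note the unpadded octets
				if i := strings.Index(line, ","); i >= 0 && line[i+1:] == mac {
					return ip, nil
				}
			}
		}
		return "", fmt.Errorf("no lease for %s", mac)
	}

	func main() {
		ip, err := findLeaseIP("/var/db/dhcpd_leases", "de:52:80:db:e4:3")
		if err != nil {
			fmt.Println(err) // expected while the guest has not requested a lease yet
			return
		}
		fmt.Println("VM IP:", ip)
	}
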
	I0926 18:27:53.762495    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:53 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 18:27:53.770564    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:53 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/offline-docker-713000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 18:27:53.771350    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:27:53.771366    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:27:53.771373    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:27:53.771378    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:27:54.148055    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 18:27:54.148070    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 18:27:54.262629    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:27:54.262655    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:27:54.262667    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:27:54.262679    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:27:54.263539    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 18:27:54.263550    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 18:27:55.758229    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 1
	I0926 18:27:55.758244    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:55.758349    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6157
	I0926 18:27:55.759182    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for de:52:80:db:e4:3 in /var/db/dhcpd_leases ...
	I0926 18:27:55.759238    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:55.759249    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:55.759257    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:55.759263    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:55.759270    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:55.759284    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:55.759291    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:55.759299    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:55.759307    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:55.759314    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:55.759322    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:55.759330    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:55.759337    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:55.759343    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:55.759359    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:55.759372    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:55.759383    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:55.759402    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:55.759412    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
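
The attempts run roughly two seconds apart (compare the 18:27:53 / 18:27:55 / 18:27:57 timestamps): each one re-reads the pid file and re-scans the 18 existing leases, and the run only fails once the whole retry window elapses without the new MAC appearing. Building on findLeaseIP from the previous sketch, such a polling loop could look like the hypothetical helper below (two-second interval assumed from the timestamps; imports of fmt, log, and time elided).

	// waitForIP polls the lease database until mac shows up or timeout
	// elapses, mirroring the "Attempt 0..N" lines in this log.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for attempt := 0; time.Now().Before(deadline); attempt++ {
			if ip, err := findLeaseIP("/var/db/dhcpd_leases", mac); err == nil {
				return ip, nil
			}
			log.Printf("Attempt %d: no lease for %s yet", attempt, mac)
			time.Sleep(2 * time.Second)
		}
		return "", fmt.Errorf("timed out waiting for a DHCP lease for %s", mac)
	}
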
	I0926 18:27:57.760330    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 2
	I0926 18:27:57.760346    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:57.760434    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6157
	I0926 18:27:57.761343    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for de:52:80:db:e4:3 in /var/db/dhcpd_leases ...
	I0926 18:27:57.761367    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:57.761382    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:57.761391    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:57.761398    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:57.761405    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:57.761411    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:57.761417    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:57.761439    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:57.761448    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:57.761456    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:57.761464    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:57.761498    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:57.761517    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:57.761526    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:57.761535    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:57.761546    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:57.761555    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:57.761562    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:57.761570    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:59.661467    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:59 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0926 18:27:59.661640    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:59 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0926 18:27:59.661649    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:59 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0926 18:27:59.681407    5953 main.go:141] libmachine: (offline-docker-713000) DBG | 2024/09/26 18:27:59 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0926 18:27:59.762919    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 3
	I0926 18:27:59.762943    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:59.763109    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6157
	I0926 18:27:59.764783    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for de:52:80:db:e4:3 in /var/db/dhcpd_leases ...
	I0926 18:27:59.764896    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:59.764909    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:59.764921    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:59.764929    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:59.764938    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:59.764948    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:59.764965    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:59.764973    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:59.764981    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:59.764991    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:59.765005    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:59.765016    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:59.765026    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:59.765033    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:59.765062    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:59.765080    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:59.765093    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:59.765107    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:59.765134    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:28:01.766369    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 4
	I0926 18:28:01.766395    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:28:01.766472    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6157
	I0926 18:28:01.767302    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for de:52:80:db:e4:3 in /var/db/dhcpd_leases ...
	I0926 18:28:01.767341    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:28:01.767359    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:28:01.767373    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:28:01.767379    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:28:01.767390    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:28:01.767398    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:28:01.767404    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:28:01.767413    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:28:01.767428    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:28:01.767439    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:28:01.767446    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:28:01.767455    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:28:01.767461    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:28:01.767466    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:28:01.767479    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:28:01.767490    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:28:01.767500    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:28:01.767508    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:28:01.767516    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:28:03.768666    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 5
	I0926 18:28:03.768680    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:28:03.768766    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6157
	I0926 18:28:03.769553    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for de:52:80:db:e4:3 in /var/db/dhcpd_leases ...
	I0926 18:28:03.769612    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:28:03.769625    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:28:03.769638    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:28:03.769649    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:28:03.769656    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:28:03.769662    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:28:03.769670    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:28:03.769680    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:28:03.769699    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:28:03.769712    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:28:03.769719    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:28:03.769726    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:28:03.769733    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:28:03.769740    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:28:03.769755    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:28:03.769764    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:28:03.769773    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:28:03.769781    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:28:03.769789    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:28:05.771241    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 6
	I0926 18:28:05.771255    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:28:05.771302    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6157
	I0926 18:28:05.772423    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for de:52:80:db:e4:3 in /var/db/dhcpd_leases ...
	I0926 18:28:05.772475    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:28:05.772487    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:28:05.772495    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:28:05.772503    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:28:05.772511    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:28:05.772519    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:28:05.772526    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:28:05.772538    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:28:05.772552    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:28:05.772565    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:28:05.772577    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:28:05.772583    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:28:05.772589    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:28:05.772597    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:28:05.772610    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:28:05.772621    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:28:05.772633    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:28:05.772642    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:28:05.772650    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:28:07.774454    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 7
	I0926 18:28:07.774467    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:28:07.774534    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6157
	I0926 18:28:07.775446    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for de:52:80:db:e4:3 in /var/db/dhcpd_leases ...
	I0926 18:28:07.775489    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:28:07.775502    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:28:07.775510    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:28:07.775516    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:28:07.775523    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:28:07.775529    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:28:07.775551    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:28:07.775560    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:28:07.775567    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:28:07.775574    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:28:07.775581    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:28:07.775589    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:28:07.775605    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:28:07.775616    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:28:07.775625    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:28:07.775633    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:28:07.775646    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:28:07.775656    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:28:07.775666    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:28:09.776142    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 8
	I0926 18:28:09.776154    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:28:09.776232    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6157
	I0926 18:28:09.777338    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for de:52:80:db:e4:3 in /var/db/dhcpd_leases ...
	I0926 18:28:09.777387    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:28:09.777396    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:28:09.777417    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:28:09.777428    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:28:09.777435    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:28:09.777457    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:28:09.777469    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:28:09.777476    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:28:09.777483    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:28:09.777488    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:28:09.777507    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:28:09.777521    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:28:09.777531    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:28:09.777539    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:28:09.777552    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:28:09.777566    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:28:09.777589    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:28:09.777598    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:28:09.777607    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:28:11.777718    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 9
	I0926 18:28:11.777731    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:28:11.777866    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6157
	I0926 18:28:11.778707    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for de:52:80:db:e4:3 in /var/db/dhcpd_leases ...
	I0926 18:28:11.778743    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:28:11.778752    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:28:11.778759    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:28:11.778766    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:28:11.778774    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:28:11.778780    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:28:11.778786    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:28:11.778792    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:28:11.778804    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:28:11.778812    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:28:11.778819    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:28:11.778826    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:28:11.778835    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:28:11.778842    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:28:11.778849    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:28:11.778854    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:28:11.778861    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:28:11.778869    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:28:11.778877    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:28:13.780980    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 10
	I0926 18:28:13.780992    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:28:13.781051    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6157
	I0926 18:28:13.782139    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for de:52:80:db:e4:3 in /var/db/dhcpd_leases ...
	I0926 18:28:13.782187    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:28:13.782199    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:28:13.782208    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:28:13.782216    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:28:13.782223    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:28:13.782234    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:28:13.782242    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:28:13.782247    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:28:13.782253    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:28:13.782261    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:28:13.782267    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:28:13.782275    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:28:13.782293    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:28:13.782300    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:28:13.782307    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:28:13.782316    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:28:13.782326    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:28:13.782334    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:28:13.782343    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:28:15.783769    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 11
	I0926 18:28:15.783785    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:28:15.783859    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6157
	I0926 18:28:15.784659    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for de:52:80:db:e4:3 in /var/db/dhcpd_leases ...
	I0926 18:28:15.784706    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:28:15.784717    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:28:15.784745    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:28:15.784752    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:28:15.784776    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:28:15.784788    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:28:15.784798    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:28:15.784806    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:28:15.784822    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:28:15.784835    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:28:15.784845    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:28:15.784853    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:28:15.784860    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:28:15.784873    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:28:15.784880    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:28:15.784888    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:28:15.784904    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:28:15.784915    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:28:15.784924    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:28:17.786829    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 12
	I0926 18:28:17.786844    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:28:17.786884    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6157
	I0926 18:28:17.787961    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for de:52:80:db:e4:3 in /var/db/dhcpd_leases ...
	I0926 18:28:17.788015    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:28:17.788028    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:28:17.788055    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:28:17.788065    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:28:17.788072    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:28:17.788080    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:28:17.788089    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:28:17.788096    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:28:17.788110    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:28:17.788121    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:28:17.788138    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:28:17.788150    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:28:17.788157    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:28:17.788163    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:28:17.788183    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:28:17.788199    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:28:17.788211    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:28:17.788218    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:28:17.788226    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	[... log truncated: Attempts 13 through 25 (18:28:19 to 18:28:43) repeat the identical search for de:52:80:db:e4:3 in /var/db/dhcpd_leases every 2 seconds; each finds the same 18 entries listed above and no match ...]
	I0926 18:28:45.822391    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 26
	I0926 18:28:45.822407    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:28:45.822473    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6157
	I0926 18:28:45.823268    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for de:52:80:db:e4:3 in /var/db/dhcpd_leases ...
	I0926 18:28:45.823318    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:28:45.823326    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:28:45.823338    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:28:45.823349    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:28:45.823357    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:28:45.823363    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:28:45.823369    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:28:45.823377    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:28:45.823390    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:28:45.823399    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:28:45.823411    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:28:45.823428    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:28:45.823441    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:28:45.823449    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:28:45.823475    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:28:45.823483    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:28:45.823490    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:28:45.823503    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:28:45.823511    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:28:47.824213    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 27
	I0926 18:28:47.824225    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:28:47.824329    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6157
	I0926 18:28:47.825169    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for de:52:80:db:e4:3 in /var/db/dhcpd_leases ...
	I0926 18:28:47.825215    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:28:47.825225    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:28:47.825234    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:28:47.825240    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:28:47.825256    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:28:47.825269    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:28:47.825276    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:28:47.825283    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:28:47.825289    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:28:47.825297    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:28:47.825305    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:28:47.825319    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:28:47.825326    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:28:47.825333    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:28:47.825340    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:28:47.825347    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:28:47.825357    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:28:47.825364    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:28:47.825372    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:28:49.826051    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 28
	I0926 18:28:49.826064    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:28:49.826118    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6157
	I0926 18:28:49.826920    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for de:52:80:db:e4:3 in /var/db/dhcpd_leases ...
	I0926 18:28:49.826964    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:28:49.826974    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:28:49.826992    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:28:49.827004    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:28:49.827011    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:28:49.827018    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:28:49.827025    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:28:49.827031    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:28:49.827040    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:28:49.827056    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:28:49.827067    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:28:49.827076    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:28:49.827083    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:28:49.827090    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:28:49.827098    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:28:49.827105    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:28:49.827120    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:28:49.827128    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:28:49.827136    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:28:51.829193    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Attempt 29
	I0926 18:28:51.829207    5953 main.go:141] libmachine: (offline-docker-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:28:51.829269    5953 main.go:141] libmachine: (offline-docker-713000) DBG | hyperkit pid from json: 6157
	I0926 18:28:51.830131    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Searching for de:52:80:db:e4:3 in /var/db/dhcpd_leases ...
	I0926 18:28:51.830168    5953 main.go:141] libmachine: (offline-docker-713000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:28:51.830176    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:28:51.830186    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:28:51.830193    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:28:51.830202    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:28:51.830209    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:28:51.830216    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:28:51.830222    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:28:51.830228    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:28:51.830234    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:28:51.830254    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:28:51.830267    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:28:51.830298    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:28:51.830307    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:28:51.830314    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:28:51.830321    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:28:51.830340    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:28:51.830352    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:28:51.830361    5953 main.go:141] libmachine: (offline-docker-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:28:53.830583    5953 client.go:171] duration metric: took 1m0.978637231s to LocalClient.Create
	I0926 18:28:55.831205    5953 start.go:128] duration metric: took 1m3.011103812s to createHost
	I0926 18:28:55.831254    5953 start.go:83] releasing machines lock for "offline-docker-713000", held for 1m3.011230656s
	W0926 18:28:55.831370    5953 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p offline-docker-713000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for de:52:80:db:e4:3
	* Failed to start hyperkit VM. Running "minikube delete -p offline-docker-713000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for de:52:80:db:e4:3
	I0926 18:28:55.894687    5953 out.go:201] 
	W0926 18:28:55.915487    5953 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for de:52:80:db:e4:3
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for de:52:80:db:e4:3
	W0926 18:28:55.915503    5953 out.go:270] * 
	* 
	W0926 18:28:55.916127    5953 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:28:55.978464    5953 out.go:201] 

                                                
                                                
** /stderr **
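The loop in the stderr above is the hyperkit driver polling /var/db/dhcpd_leases once every ~2 seconds for the guest MAC it just generated (de:52:80:db:e4:3). All 18 entries it finds belong to earlier minikube VMs (192.169.0.2 through 192.169.0.19), so the lookup never matches and createHost eventually gives up. A minimal Go sketch of that per-attempt lookup, assuming bootpd's usual entry layout (name=, ip_address=, hw_address= fields inside braced entries) — the real parser lives in docker-machine-driver-hyperkit, and the helper name findIPByMAC is ours:

// leaselookup.go — a sketch of the lease scan, not the driver's actual code.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPByMAC scans a bootpd-style leases file for an entry whose
// hw_address ends with the target MAC and returns that entry's ip_address.
// Note the MAC is written without zero-padded octets (e.g. "de:52:80:db:e4:3"),
// matching how both hyperkit and the leases file render it.
func findIPByMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{":
			ip = "" // a new entry begins; forget the previous one
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// value looks like "1,de:52:80:db:e4:3": a type code, then the MAC
			if ip != "" && strings.HasSuffix(line, ","+mac) {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("could not find an IP address for %s", mac)
}

func main() {
	ip, err := findIPByMAC("/var/db/dhcpd_leases", "de:52:80:db:e4:3")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip)
}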
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-713000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-26 18:28:56.106675 -0700 PDT m=+4498.841591578
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-713000 -n offline-docker-713000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-713000 -n offline-docker-713000: exit status 7 (82.999031ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0926 18:28:56.187786    6172 status.go:386] failed to get driver ip: getting IP: IP address is not set
	E0926 18:28:56.187809    6172 status.go:119] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-713000" host is not running, skipping log retrieval (state="Error")
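The exit status 7 above is expected rather than a second bug: minikube status composes its exit code as a bitmask of which components are down. A sketch of that scheme, with constant names as they appear in minikube's cmd/minikube/cmd/status.go (treat the exact values as an assumption against v1.34):

package main

import "fmt"

// Bit flags that `minikube status` ORs into its exit code (assumed values).
const (
	minikubeNotRunningStatusFlag = 1 << 0 // host/VM not running
	clusterNotRunningStatusFlag  = 1 << 1 // kubelet not running
	k8sNotRunningStatusFlag      = 1 << 2 // apiserver not running
)

func main() {
	code := minikubeNotRunningStatusFlag | clusterNotRunningStatusFlag | k8sNotRunningStatusFlag
	fmt.Println(code) // 7: everything down — what a VM with no IP looks like,
	// hence the helper's "(may be ok)" note above.
}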
helpers_test.go:175: Cleaning up "offline-docker-713000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-713000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-713000: (5.25029001s)
--- FAIL: TestOffline (195.51s)
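The timing in the log also pins down the retry shape: lease-file attempts run every 2 seconds, LocalClient.Create gives up after about a minute (1m0.978637231s), and one delete-and-recreate cycle later the run exits with status 80 after roughly 3m10s total. A sketch of that poll-until-deadline loop, reusing the hypothetical findIPByMAC helper from the earlier sketch (same package, plus a "time" import) — the shape of the retry, not minikube's actual code:

// pollForIP retries the lease lookup every 2s until the deadline passes —
// the "Attempt N" cadence visible in the stderr above.
func pollForIP(path, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ip, err := findIPByMAC(path, mac); err == nil {
			return ip, nil
		}
		time.Sleep(2 * time.Second)
	}
	return "", fmt.Errorf("IP address never found in dhcp leases file")
}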

                                                
                                    
TestAddons/parallel/Registry (74.63s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 6.447406ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-gdmdl" [b49ae8a8-4cbc-4a75-8913-e8be3cc60c32] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004486179s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nkz2s" [516b6f7b-4fac-4c3f-b845-0484389422ee] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006349581s
addons_test.go:338: (dbg) Run:  kubectl --context addons-433000 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-433000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-433000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.079099849s)

                                                
                                                
-- stdout --
	pod "registry-test" deleted

                                                
                                                
-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

                                                
                                                
** /stderr **
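The probe that timed out is nothing more than an HTTP request from a busybox pod against the registry Service's cluster DNS name. A minimal Go equivalent of the `wget --spider -S` check, assuming it runs inside the cluster where registry.kube-system.svc.cluster.local resolves:

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	// Headers only, like `wget --spider`; print protocol and status line.
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Fprintln(os.Stderr, "registry unreachable:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Proto, resp.Status) // the test asserts "HTTP/1.1 200"
}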
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-433000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-darwin-amd64 -p addons-433000 ip
2024/09/26 17:28:14 [DEBUG] GET http://192.169.0.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-darwin-amd64 -p addons-433000 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p addons-433000 -n addons-433000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p addons-433000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p addons-433000 logs -n 25: (2.270381371s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-592000 | jenkins | v1.34.0 | 26 Sep 24 17:13 PDT |                     |
	|         | -p download-only-592000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=hyperkit                                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:14 PDT |
	| delete  | -p download-only-592000                                                                     | download-only-592000 | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:14 PDT |
	| start   | -o=json --download-only                                                                     | download-only-120000 | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT |                     |
	|         | -p download-only-120000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=hyperkit                                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:14 PDT |
	| delete  | -p download-only-120000                                                                     | download-only-120000 | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:14 PDT |
	| delete  | -p download-only-592000                                                                     | download-only-592000 | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:14 PDT |
	| delete  | -p download-only-120000                                                                     | download-only-120000 | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:14 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-780000 | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT |                     |
	|         | binary-mirror-780000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49641                                                                      |                      |         |         |                     |                     |
	|         | --driver=hyperkit                                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-780000                                                                     | binary-mirror-780000 | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:14 PDT |
	| addons  | disable dashboard -p                                                                        | addons-433000        | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT |                     |
	|         | addons-433000                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-433000        | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT |                     |
	|         | addons-433000                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-433000 --wait=true                                                                | addons-433000        | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:18 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=hyperkit  --addons=ingress                                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | addons-433000 addons disable                                                                | addons-433000        | jenkins | v1.34.0 | 26 Sep 24 17:18 PDT | 26 Sep 24 17:18 PDT |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-433000 addons disable                                                                | addons-433000        | jenkins | v1.34.0 | 26 Sep 24 17:27 PDT | 26 Sep 24 17:27 PDT |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-433000        | jenkins | v1.34.0 | 26 Sep 24 17:27 PDT | 26 Sep 24 17:27 PDT |
	|         | -p addons-433000                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-433000 ssh cat                                                                       | addons-433000        | jenkins | v1.34.0 | 26 Sep 24 17:27 PDT | 26 Sep 24 17:27 PDT |
	|         | /opt/local-path-provisioner/pvc-6ac8fe22-befd-49d4-b839-536d0bf298f4_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-433000 addons disable                                                                | addons-433000        | jenkins | v1.34.0 | 26 Sep 24 17:27 PDT | 26 Sep 24 17:28 PDT |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-433000 ip                                                                            | addons-433000        | jenkins | v1.34.0 | 26 Sep 24 17:28 PDT | 26 Sep 24 17:28 PDT |
	| addons  | addons-433000 addons disable                                                                | addons-433000        | jenkins | v1.34.0 | 26 Sep 24 17:28 PDT | 26 Sep 24 17:28 PDT |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/26 17:14:27
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 17:14:27.792730    1767 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:14:27.793537    1767 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:14:27.793552    1767 out.go:358] Setting ErrFile to fd 2...
	I0926 17:14:27.793559    1767 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:14:27.794035    1767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 17:14:27.795621    1767 out.go:352] Setting JSON to false
	I0926 17:14:27.818858    1767 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":837,"bootTime":1727395230,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0926 17:14:27.818995    1767 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:14:27.840477    1767 out.go:177] * [addons-433000] minikube v1.34.0 on Darwin 14.6.1
	I0926 17:14:27.882294    1767 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:14:27.882360    1767 notify.go:220] Checking for updates...
	I0926 17:14:27.924357    1767 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:14:27.945443    1767 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0926 17:14:27.966308    1767 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:14:27.987320    1767 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 17:14:28.029033    1767 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:14:28.049961    1767 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:14:28.079406    1767 out.go:177] * Using the hyperkit driver based on user configuration
	I0926 17:14:28.120396    1767 start.go:297] selected driver: hyperkit
	I0926 17:14:28.120444    1767 start.go:901] validating driver "hyperkit" against <nil>
	I0926 17:14:28.120458    1767 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:14:28.123634    1767 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:14:28.123758    1767 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19711-1128/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0926 17:14:28.132336    1767 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0926 17:14:28.136256    1767 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:14:28.136276    1767 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0926 17:14:28.136306    1767 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 17:14:28.136567    1767 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:14:28.136605    1767 cni.go:84] Creating CNI manager for ""
	I0926 17:14:28.136644    1767 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 17:14:28.136651    1767 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 17:14:28.136719    1767 start.go:340] cluster config:
	{Name:addons-433000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-433000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:14:28.136836    1767 iso.go:125] acquiring lock: {Name:mka8a9c5a237c1e4ae233281d2ff7965d13a843d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:14:28.179139    1767 out.go:177] * Starting "addons-433000" primary control-plane node in "addons-433000" cluster
	I0926 17:14:28.199252    1767 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:14:28.199347    1767 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0926 17:14:28.199373    1767 cache.go:56] Caching tarball of preloaded images
	I0926 17:14:28.199560    1767 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 17:14:28.199595    1767 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:14:28.199902    1767 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/config.json ...
	I0926 17:14:28.199921    1767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/config.json: {Name:mk2d585c134e8aff91dd11a0f885b6a9ed34fb3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:28.200274    1767 start.go:360] acquireMachinesLock for addons-433000: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:14:28.200443    1767 start.go:364] duration metric: took 156.271µs to acquireMachinesLock for "addons-433000"
	I0926 17:14:28.200498    1767 start.go:93] Provisioning new machine with config: &{Name:addons-433000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-433000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:14:28.200590    1767 start.go:125] createHost starting for "" (driver="hyperkit")
	I0926 17:14:28.242115    1767 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0926 17:14:28.242251    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:14:28.242284    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:14:28.250890    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49648
	I0926 17:14:28.251225    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:14:28.251614    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:14:28.251626    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:14:28.251871    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:14:28.251992    1767 main.go:141] libmachine: (addons-433000) Calling .GetMachineName
	I0926 17:14:28.252079    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:14:28.252175    1767 start.go:159] libmachine.API.Create for "addons-433000" (driver="hyperkit")
	I0926 17:14:28.252198    1767 client.go:168] LocalClient.Create starting
	I0926 17:14:28.252235    1767 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem
	I0926 17:14:28.386309    1767 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem
	I0926 17:14:28.521676    1767 main.go:141] libmachine: Running pre-create checks...
	I0926 17:14:28.521688    1767 main.go:141] libmachine: (addons-433000) Calling .PreCreateCheck
	I0926 17:14:28.521855    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:14:28.521981    1767 main.go:141] libmachine: (addons-433000) Calling .GetConfigRaw
	I0926 17:14:28.522461    1767 main.go:141] libmachine: Creating machine...
	I0926 17:14:28.522479    1767 main.go:141] libmachine: (addons-433000) Calling .Create
	I0926 17:14:28.522569    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:14:28.522683    1767 main.go:141] libmachine: (addons-433000) DBG | I0926 17:14:28.522555    1776 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 17:14:28.522759    1767 main.go:141] libmachine: (addons-433000) Downloading /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1128/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0926 17:14:28.787809    1767 main.go:141] libmachine: (addons-433000) DBG | I0926 17:14:28.787700    1776 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa...
	I0926 17:14:29.109240    1767 main.go:141] libmachine: (addons-433000) DBG | I0926 17:14:29.109180    1776 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/addons-433000.rawdisk...
	I0926 17:14:29.109255    1767 main.go:141] libmachine: (addons-433000) DBG | Writing magic tar header
	I0926 17:14:29.109265    1767 main.go:141] libmachine: (addons-433000) DBG | Writing SSH key tar header
	I0926 17:14:29.109611    1767 main.go:141] libmachine: (addons-433000) DBG | I0926 17:14:29.109556    1776 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000 ...
	I0926 17:14:29.632514    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:14:29.632535    1767 main.go:141] libmachine: (addons-433000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/hyperkit.pid
	I0926 17:14:29.632568    1767 main.go:141] libmachine: (addons-433000) DBG | Using UUID 81e76aed-7cf2-4e32-882d-0e8e70da50ed
	I0926 17:14:29.873172    1767 main.go:141] libmachine: (addons-433000) DBG | Generated MAC 8a:7e:35:69:36:a6
	I0926 17:14:29.873193    1767 main.go:141] libmachine: (addons-433000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-433000
	I0926 17:14:29.873227    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:29 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"81e76aed-7cf2-4e32-882d-0e8e70da50ed", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:14:29.873252    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:29 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"81e76aed-7cf2-4e32-882d-0e8e70da50ed", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:14:29.873308    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:29 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/hyperkit.pid", "-c", "2", "-m", "4000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "81e76aed-7cf2-4e32-882d-0e8e70da50ed", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/addons-433000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-433000"}
	I0926 17:14:29.873352    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:29 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/hyperkit.pid -c 2 -m 4000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 81e76aed-7cf2-4e32-882d-0e8e70da50ed -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/addons-433000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-433000"
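
The two DEBUG lines above give the exact hyperkit argv. A rough Go sketch (not minikube's actual driver code; stateDir and the VM name are illustrative stand-ins) of how that invocation could be assembled:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // hyperkitCmd assembles an invocation equivalent to the CmdLine logged above.
    func hyperkitCmd(stateDir, uuid, name string) *exec.Cmd {
    	kernelArgs := "earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset " +
    		"norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes " +
    		"random.trust_cpu=on hw_rng_model=virtio base host=" + name
    	return exec.Command("/usr/local/bin/hyperkit",
    		"-A", "-u",
    		"-F", stateDir+"/hyperkit.pid", // pid file; its absence is the "clean start" check above
    		"-c", "2", "-m", "4000M", // CPUs/Memory from the machine config
    		"-s", "0:0,hostbridge", "-s", "31,lpc",
    		"-s", "1:0,virtio-net", // vmnet NIC whose generated MAC is later matched in dhcpd_leases
    		"-U", uuid,
    		"-s", "2:0,virtio-blk,"+stateDir+"/"+name+".rawdisk",
    		"-s", "3,ahci-cd,"+stateDir+"/boot2docker.iso",
    		"-s", "4,virtio-rnd",
    		"-l", "com1,autopty="+stateDir+"/tty,log="+stateDir+"/console-ring",
    		"-f", "kexec,"+stateDir+"/bzimage,"+stateDir+"/initrd,"+kernelArgs,
    	)
    }

    func main() {
    	cmd := hyperkitCmd("/tmp/addons-433000", "81e76aed-7cf2-4e32-882d-0e8e70da50ed", "addons-433000")
    	fmt.Println(cmd.Args)
    }
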
	I0926 17:14:29.873367    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:29 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 17:14:29.876411    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:29 DEBUG: hyperkit: Pid is 1782
	I0926 17:14:29.876868    1767 main.go:141] libmachine: (addons-433000) DBG | Attempt 0
	I0926 17:14:29.876882    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:14:29.876936    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:14:29.877801    1767 main.go:141] libmachine: (addons-433000) DBG | Searching for 8a:7e:35:69:36:a6 in /var/db/dhcpd_leases ...
	I0926 17:14:29.894100    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:29 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0926 17:14:29.951011    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:29 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 17:14:29.951682    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:14:29.951696    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:14:29.951703    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:14:29.951709    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:14:30.485841    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:30 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 17:14:30.485861    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:30 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 17:14:30.602808    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:14:30.602832    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:14:30.602843    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:14:30.602850    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:14:30.603653    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:30 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 17:14:30.603681    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:30 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 17:14:31.879431    1767 main.go:141] libmachine: (addons-433000) DBG | Attempt 1
	I0926 17:14:31.879453    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:14:31.879575    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:14:31.880377    1767 main.go:141] libmachine: (addons-433000) DBG | Searching for 8a:7e:35:69:36:a6 in /var/db/dhcpd_leases ...
	I0926 17:14:33.881976    1767 main.go:141] libmachine: (addons-433000) DBG | Attempt 2
	I0926 17:14:33.881992    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:14:33.882085    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:14:33.882880    1767 main.go:141] libmachine: (addons-433000) DBG | Searching for 8a:7e:35:69:36:a6 in /var/db/dhcpd_leases ...
	I0926 17:14:35.883002    1767 main.go:141] libmachine: (addons-433000) DBG | Attempt 3
	I0926 17:14:35.883018    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:14:35.883097    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:14:35.883951    1767 main.go:141] libmachine: (addons-433000) DBG | Searching for 8a:7e:35:69:36:a6 in /var/db/dhcpd_leases ...
	I0926 17:14:36.401039    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:36 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0926 17:14:36.401097    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:36 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0926 17:14:36.401106    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:36 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0926 17:14:36.419367    1767 main.go:141] libmachine: (addons-433000) DBG | 2024/09/26 17:14:36 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0926 17:14:37.885415    1767 main.go:141] libmachine: (addons-433000) DBG | Attempt 4
	I0926 17:14:37.885430    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:14:37.885506    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:14:37.886262    1767 main.go:141] libmachine: (addons-433000) DBG | Searching for 8a:7e:35:69:36:a6 in /var/db/dhcpd_leases ...
	I0926 17:14:39.886471    1767 main.go:141] libmachine: (addons-433000) DBG | Attempt 5
	I0926 17:14:39.886488    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:14:39.886600    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:14:39.887624    1767 main.go:141] libmachine: (addons-433000) DBG | Searching for 8a:7e:35:69:36:a6 in /var/db/dhcpd_leases ...
	I0926 17:14:39.887687    1767 main.go:141] libmachine: (addons-433000) DBG | Found 1 entries in /var/db/dhcpd_leases!
	I0926 17:14:39.887703    1767 main.go:141] libmachine: (addons-433000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 17:14:39.887709    1767 main.go:141] libmachine: (addons-433000) DBG | Found match: 8a:7e:35:69:36:a6
	I0926 17:14:39.887720    1767 main.go:141] libmachine: (addons-433000) DBG | IP: 192.169.0.2
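
The driver resolves the VM's IP by polling /var/db/dhcpd_leases until the generated MAC shows up (attempts 0 through 5 above). A minimal lookup sketch, assuming the stanza format macOS's bootpd writes; the real driver also normalizes MACs since bootpd strips leading zeros from octets:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // ipForMAC scans the bootpd lease file for a stanza whose hw_address
    // matches mac and returns that stanza's ip_address.
    func ipForMAC(leasesPath, mac string) (string, error) {
    	f, err := os.Open(leasesPath)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=") // remembered until the MAC line
    		case strings.HasPrefix(line, "hw_address="):
    			// hw_address=1,8a:7e:35:69:36:a6 -- ignore the leading type byte
    			if strings.HasSuffix(line, ","+mac) || strings.HasSuffix(line, "="+mac) {
    				return ip, nil
    			}
    		}
    	}
    	if err := sc.Err(); err != nil {
    		return "", err
    	}
    	return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
    	ip, err := ipForMAC("/var/db/dhcpd_leases", "8a:7e:35:69:36:a6")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("IP:", ip)
    }
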
	I0926 17:14:39.887786    1767 main.go:141] libmachine: (addons-433000) Calling .GetConfigRaw
	I0926 17:14:39.888418    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:14:39.888515    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:14:39.888601    1767 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0926 17:14:39.888610    1767 main.go:141] libmachine: (addons-433000) Calling .GetState
	I0926 17:14:39.888679    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:14:39.888733    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:14:39.889463    1767 main.go:141] libmachine: Detecting operating system of created instance...
	I0926 17:14:39.889474    1767 main.go:141] libmachine: Waiting for SSH to be available...
	I0926 17:14:39.889481    1767 main.go:141] libmachine: Getting to WaitForSSH function...
	I0926 17:14:39.889485    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:14:39.889555    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:14:39.889634    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:39.889713    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:39.889783    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:14:39.890922    1767 main.go:141] libmachine: Using SSH client type: native
	I0926 17:14:39.891074    1767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf969d00] 0xf96c9e0 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0926 17:14:39.891080    1767 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0926 17:14:40.892113    1767 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.2:22: connect: connection refused
	I0926 17:14:43.946218    1767 main.go:141] libmachine: SSH cmd err, output: <nil>: 
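
WaitForSSH simply retries running `exit 0` over SSH until it succeeds; the first dial above was refused, and the probe passed about four seconds later. A sketch of that loop using golang.org/x/crypto/ssh (the retry interval and timeout are assumptions, not values from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // waitForSSH dials addr and runs `exit 0` until it succeeds or the
    // deadline passes.
    func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
    	keyBytes, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
    		Timeout:         5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		client, err := ssh.Dial("tcp", addr, cfg)
    		if err == nil {
    			sess, serr := client.NewSession()
    			if serr == nil {
    				rerr := sess.Run("exit 0")
    				sess.Close()
    				client.Close()
    				if rerr == nil {
    					return nil // SSH is available
    				}
    			} else {
    				client.Close()
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("ssh not available on %s after %v", addr, timeout)
    }

    func main() {
    	fmt.Println(waitForSSH("192.169.0.2:22", "docker",
    		"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa",
    		time.Minute))
    }
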
	I0926 17:14:43.946230    1767 main.go:141] libmachine: Detecting the provisioner...
	I0926 17:14:43.946236    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:14:43.946378    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:14:43.946473    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:43.946552    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:43.946648    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:14:43.946808    1767 main.go:141] libmachine: Using SSH client type: native
	I0926 17:14:43.946961    1767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf969d00] 0xf96c9e0 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0926 17:14:43.946969    1767 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0926 17:14:44.000625    1767 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0926 17:14:44.000704    1767 main.go:141] libmachine: found compatible host: buildroot
	I0926 17:14:44.000710    1767 main.go:141] libmachine: Provisioning with buildroot...
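
Provisioner detection boils down to reading /etc/os-release on the guest and matching its ID field; "buildroot" selects the buildroot provisioner. A sketch of that parse, fed the exact output captured above:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // osReleaseID extracts the ID= field from os-release contents.
    func osReleaseID(contents string) string {
    	sc := bufio.NewScanner(strings.NewReader(contents))
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if strings.HasPrefix(line, "ID=") {
    			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
    		}
    	}
    	return ""
    }

    func main() {
    	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
    	fmt.Println(osReleaseID(out)) // buildroot
    }
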
	I0926 17:14:44.000716    1767 main.go:141] libmachine: (addons-433000) Calling .GetMachineName
	I0926 17:14:44.000845    1767 buildroot.go:166] provisioning hostname "addons-433000"
	I0926 17:14:44.000856    1767 main.go:141] libmachine: (addons-433000) Calling .GetMachineName
	I0926 17:14:44.000954    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:14:44.001087    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:14:44.001179    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:44.001281    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:44.001360    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:14:44.001490    1767 main.go:141] libmachine: Using SSH client type: native
	I0926 17:14:44.001655    1767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf969d00] 0xf96c9e0 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0926 17:14:44.001666    1767 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-433000 && echo "addons-433000" | sudo tee /etc/hostname
	I0926 17:14:44.063387    1767 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-433000
	
	I0926 17:14:44.063406    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:14:44.063538    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:14:44.063649    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:44.063756    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:44.063855    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:14:44.063987    1767 main.go:141] libmachine: Using SSH client type: native
	I0926 17:14:44.064124    1767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf969d00] 0xf96c9e0 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0926 17:14:44.064135    1767 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-433000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-433000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-433000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:14:44.122193    1767 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 17:14:44.122215    1767 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 17:14:44.122226    1767 buildroot.go:174] setting up certificates
	I0926 17:14:44.122235    1767 provision.go:84] configureAuth start
	I0926 17:14:44.122248    1767 main.go:141] libmachine: (addons-433000) Calling .GetMachineName
	I0926 17:14:44.122389    1767 main.go:141] libmachine: (addons-433000) Calling .GetIP
	I0926 17:14:44.122473    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:14:44.122569    1767 provision.go:143] copyHostCerts
	I0926 17:14:44.122951    1767 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 17:14:44.123279    1767 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 17:14:44.123456    1767 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 17:14:44.123633    1767 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.addons-433000 san=[127.0.0.1 192.169.0.2 addons-433000 localhost minikube]
	I0926 17:14:44.411054    1767 provision.go:177] copyRemoteCerts
	I0926 17:14:44.411570    1767 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 17:14:44.411604    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:14:44.411796    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:14:44.411888    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:44.411979    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:14:44.412075    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:14:44.444595    1767 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 17:14:44.464453    1767 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 17:14:44.484281    1767 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0926 17:14:44.504281    1767 provision.go:87] duration metric: took 382.027885ms to configureAuth
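
configureAuth copies the host CA material and issues a server certificate whose SANs match the "generating server cert" line above (127.0.0.1, 192.169.0.2, addons-433000, localhost, minikube). A self-contained crypto/x509 sketch; the key size, serial, and validity here are assumptions, not values taken from the log:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert issues a server cert signed by ca/caKey with the SANs
    // seen in the log.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.addons-433000"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.2")},
    		DNSNames:     []string{"addons-433000", "localhost", "minikube"},
    	}
    	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    }

    func main() {
    	// Throwaway CA so the sketch runs end to end.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now().Add(-time.Hour),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	ca, err := x509.ParseCertificate(caDER)
    	if err != nil {
    		panic(err)
    	}
    	der, err := newServerCert(ca, caKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("server cert: %d DER bytes\n", len(der))
    }
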
	I0926 17:14:44.504296    1767 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:14:44.504437    1767 config.go:182] Loaded profile config "addons-433000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:14:44.504456    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:14:44.504589    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:14:44.504686    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:14:44.504797    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:44.504872    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:44.504973    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:14:44.505120    1767 main.go:141] libmachine: Using SSH client type: native
	I0926 17:14:44.505260    1767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf969d00] 0xf96c9e0 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0926 17:14:44.505268    1767 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:14:44.557230    1767 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:14:44.557248    1767 buildroot.go:70] root file system type: tmpfs
	I0926 17:14:44.557321    1767 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:14:44.557334    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:14:44.557465    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:14:44.557559    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:44.557650    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:44.557730    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:14:44.557866    1767 main.go:141] libmachine: Using SSH client type: native
	I0926 17:14:44.558006    1767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf969d00] 0xf96c9e0 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0926 17:14:44.558055    1767 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:14:44.620582    1767 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 17:14:44.620603    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:14:44.620738    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:14:44.620819    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:44.620908    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:44.621002    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:14:44.621146    1767 main.go:141] libmachine: Using SSH client type: native
	I0926 17:14:44.621275    1767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf969d00] 0xf96c9e0 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0926 17:14:44.621286    1767 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:14:46.221022    1767 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 17:14:46.221036    1767 main.go:141] libmachine: Checking connection to Docker...
	I0926 17:14:46.221043    1767 main.go:141] libmachine: (addons-433000) Calling .GetURL
	I0926 17:14:46.221190    1767 main.go:141] libmachine: Docker is up and running!
	I0926 17:14:46.221198    1767 main.go:141] libmachine: Reticulating splines...
	I0926 17:14:46.221203    1767 client.go:171] duration metric: took 17.968828855s to LocalClient.Create
	I0926 17:14:46.221212    1767 start.go:167] duration metric: took 17.968868046s to libmachine.API.Create "addons-433000"
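
The `diff -u ... || { mv ...; systemctl ...; }` one-liner issued at 17:14:44.621 is an idempotency guard: the rendered docker.service.new is only swapped in, and docker reloaded/enabled/restarted, when it actually differs from what is installed (here diff failed because no unit existed yet, so the install ran). The same flow as discrete steps, sketched with local exec calls standing in for the SSH runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // updateUnit swaps next into cur only when diff reports a change.
    func updateUnit(cur, next string) error {
    	if exec.Command("sudo", "diff", "-u", cur, next).Run() == nil {
    		return nil // unit unchanged; nothing to do
    	}
    	steps := [][]string{
    		{"sudo", "mv", next, cur},
    		{"sudo", "systemctl", "-f", "daemon-reload"},
    		{"sudo", "systemctl", "-f", "enable", "docker"},
    		{"sudo", "systemctl", "-f", "restart", "docker"},
    	}
    	for _, s := range steps {
    		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v: %v: %s", s, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	fmt.Println(updateUnit("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new"))
    }
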
	I0926 17:14:46.221221    1767 start.go:293] postStartSetup for "addons-433000" (driver="hyperkit")
	I0926 17:14:46.221228    1767 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:14:46.221245    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:14:46.221396    1767 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:14:46.221412    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:14:46.221510    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:14:46.221600    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:46.221690    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:14:46.221783    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:14:46.254955    1767 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:14:46.258321    1767 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 17:14:46.258336    1767 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 17:14:46.258645    1767 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 17:14:46.258698    1767 start.go:296] duration metric: took 37.471843ms for postStartSetup
	I0926 17:14:46.258728    1767 main.go:141] libmachine: (addons-433000) Calling .GetConfigRaw
	I0926 17:14:46.259293    1767 main.go:141] libmachine: (addons-433000) Calling .GetIP
	I0926 17:14:46.259424    1767 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/config.json ...
	I0926 17:14:46.259768    1767 start.go:128] duration metric: took 18.058980561s to createHost
	I0926 17:14:46.259784    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:14:46.259866    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:14:46.259972    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:46.260062    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:46.260147    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:14:46.260269    1767 main.go:141] libmachine: Using SSH client type: native
	I0926 17:14:46.260396    1767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf969d00] 0xf96c9e0 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0926 17:14:46.260403    1767 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:14:46.311655    1767 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727396086.398339860
	
	I0926 17:14:46.311668    1767 fix.go:216] guest clock: 1727396086.398339860
	I0926 17:14:46.311673    1767 fix.go:229] Guest: 2024-09-26 17:14:46.39833986 -0700 PDT Remote: 2024-09-26 17:14:46.259777 -0700 PDT m=+18.501143885 (delta=138.56286ms)
	I0926 17:14:46.311693    1767 fix.go:200] guest clock delta is within tolerance: 138.56286ms
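
The guest-clock check compares the guest's `date +%s.%N` output against the host clock; here the delta was ~139ms and accepted. A sketch of the comparison (the tolerance itself is not shown in the log and is left to the caller):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses `date +%s.%N` output and returns how far the guest
    // clock sits from the host clock.
    func clockDelta(guest string) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return 0, err
    		}
    	}
    	return time.Since(time.Unix(sec, nsec)), nil
    }

    func main() {
    	d, err := clockDelta("1727396086.398339860") // the guest reading logged above
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("guest clock delta: %v\n", d)
    }
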
	I0926 17:14:46.311697    1767 start.go:83] releasing machines lock for "addons-433000", held for 18.111070956s
	I0926 17:14:46.311714    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:14:46.311849    1767 main.go:141] libmachine: (addons-433000) Calling .GetIP
	I0926 17:14:46.311934    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:14:46.312247    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:14:46.312347    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:14:46.312479    1767 ssh_runner.go:195] Run: cat /version.json
	I0926 17:14:46.312492    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:14:46.312579    1767 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:14:46.312581    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:14:46.312608    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:14:46.312676    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:14:46.312694    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:46.312772    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:14:46.312789    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:14:46.312844    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:14:46.312871    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:14:46.312926    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:14:46.396746    1767 ssh_runner.go:195] Run: systemctl --version
	I0926 17:14:46.401248    1767 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 17:14:46.405576    1767 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:14:46.405627    1767 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 17:14:46.419357    1767 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 17:14:46.419371    1767 start.go:495] detecting cgroup driver to use...
	I0926 17:14:46.419474    1767 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:14:46.436008    1767 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 17:14:46.445728    1767 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:14:46.454931    1767 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:14:46.454989    1767 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:14:46.463786    1767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:14:46.472772    1767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:14:46.482020    1767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:14:46.491429    1767 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:14:46.500746    1767 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:14:46.509644    1767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:14:46.518888    1767 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 17:14:46.528250    1767 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:14:46.536861    1767 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 17:14:46.536923    1767 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 17:14:46.546061    1767 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 17:14:46.554801    1767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:14:46.661317    1767 ssh_runner.go:195] Run: sudo systemctl restart containerd
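
The run of sed edits above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false), the io.containerd.runc.v2 shim, and /etc/cni/net.d, before the daemon-reload and restart. The key toggle, redone in Go purely for illustration:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setCgroupfs performs the same in-place edit as
    //   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    func setCgroupfs(config []byte) []byte {
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	return re.ReplaceAll(config, []byte("${1}SystemdCgroup = false"))
    }

    func main() {
    	in := []byte("  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
    		"    SystemdCgroup = true\n")
    	fmt.Print(string(setCgroupfs(in)))
    }
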
	I0926 17:14:46.681645    1767 start.go:495] detecting cgroup driver to use...
	I0926 17:14:46.681727    1767 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:14:46.697852    1767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:14:46.711530    1767 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:14:46.726328    1767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:14:46.740181    1767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:14:46.753211    1767 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 17:14:46.789521    1767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:14:46.802337    1767 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:14:46.818594    1767 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:14:46.821867    1767 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:14:46.831747    1767 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 17:14:46.848634    1767 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:14:46.963716    1767 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:14:47.084630    1767 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:14:47.084705    1767 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 17:14:47.098967    1767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:14:47.201579    1767 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:14:49.527068    1767 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.325447298s)
	I0926 17:14:49.527133    1767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 17:14:49.537378    1767 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0926 17:14:49.551660    1767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:14:49.561850    1767 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 17:14:49.656977    1767 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 17:14:49.769710    1767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:14:49.889119    1767 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 17:14:49.903041    1767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:14:49.914353    1767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:14:50.026483    1767 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 17:14:50.084321    1767 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 17:14:50.085467    1767 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0926 17:14:50.090049    1767 start.go:563] Will wait 60s for crictl version
	I0926 17:14:50.090100    1767 ssh_runner.go:195] Run: which crictl
	I0926 17:14:50.093123    1767 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 17:14:50.118916    1767 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0926 17:14:50.119005    1767 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:14:50.140113    1767 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:14:50.182201    1767 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0926 17:14:50.182230    1767 main.go:141] libmachine: (addons-433000) Calling .GetIP
	I0926 17:14:50.183000    1767 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0926 17:14:50.186643    1767 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
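
The /etc/hosts rewrite above strips any stale host.minikube.internal line and appends the gateway mapping, staging through /tmp/h.$$ so the write into /etc/hosts is a single sudo cp. The same transformation sketched in Go:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // ensureHostsEntry drops any line ending in "\t<name>" and appends a
    // fresh "<ip>\t<name>" mapping -- the effect of the shell pipeline.
    func ensureHostsEntry(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	fmt.Print(ensureHostsEntry("127.0.0.1\tlocalhost\n", "192.169.0.1", "host.minikube.internal"))
    }
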
	I0926 17:14:50.196158    1767 kubeadm.go:883] updating cluster {Name:addons-433000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-433000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 17:14:50.196225    1767 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:14:50.196297    1767 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 17:14:50.208476    1767 docker.go:685] Got preloaded images: 
	I0926 17:14:50.208489    1767 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0926 17:14:50.208541    1767 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0926 17:14:50.216210    1767 ssh_runner.go:195] Run: which lz4
	I0926 17:14:50.219202    1767 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0926 17:14:50.222298    1767 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0926 17:14:50.222316    1767 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I0926 17:14:51.167011    1767 docker.go:649] duration metric: took 947.850131ms to copy over tarball
	I0926 17:14:51.167236    1767 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0926 17:14:53.578663    1767 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.411383656s)
	I0926 17:14:53.578680    1767 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0926 17:14:53.603994    1767 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0926 17:14:53.612851    1767 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0926 17:14:53.626831    1767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:14:53.721293    1767 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:14:56.121786    1767 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.400450902s)
	I0926 17:14:56.121890    1767 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 17:14:56.135736    1767 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0926 17:14:56.135757    1767 cache_images.go:84] Images are preloaded, skipping loading
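
The preload step above: since no images were loaded yet (17:14:50.208), the cached tarball is stat'd on the guest, scp'd over (~342MB), untarred into /var with lz4 while preserving security.capability xattrs (so binaries keep their file capabilities), then deleted. A sketch of that sequence, with local exec calls standing in for the SSH runner and the scp elided:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // preload mirrors the existence check, extraction, and cleanup logged above.
    func preload(tarball string) error {
    	if exec.Command("stat", "-c", "%s %y", tarball).Run() != nil {
    		// Not on the guest yet; the driver scp's the cached
    		// preloaded-images-k8s-...-amd64.tar.lz4 here (copy elided).
    		fmt.Println("tarball missing; scp of the cached preload archive goes here")
    	}
    	steps := [][]string{
    		{"sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    			"-I", "lz4", "-C", "/var", "-xf", tarball},
    		{"sudo", "rm", "-f", tarball},
    	}
    	for _, s := range steps {
    		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v: %v: %s", s, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	fmt.Println(preload("/preloaded.tar.lz4"))
    }
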
	I0926 17:14:56.135777    1767 kubeadm.go:934] updating node { 192.169.0.2 8443 v1.31.1 docker true true} ...
	I0926 17:14:56.135853    1767 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-433000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-433000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 17:14:56.135935    1767 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0926 17:14:56.173320    1767 cni.go:84] Creating CNI manager for ""
	I0926 17:14:56.173337    1767 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 17:14:56.173349    1767 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 17:14:56.173364    1767 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-433000 NodeName:addons-433000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 17:14:56.173445    1767 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-433000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 17:14:56.173516    1767 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0926 17:14:56.181107    1767 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 17:14:56.181160    1767 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 17:14:56.188316    1767 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0926 17:14:56.201905    1767 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 17:14:56.215079    1767 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
	I0926 17:14:56.228855    1767 ssh_runner.go:195] Run: grep 192.169.0.2	control-plane.minikube.internal$ /etc/hosts
	I0926 17:14:56.231861    1767 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 17:14:56.242200    1767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:14:56.342696    1767 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:14:56.358267    1767 certs.go:68] Setting up /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000 for IP: 192.169.0.2
	I0926 17:14:56.358279    1767 certs.go:194] generating shared ca certs ...
	I0926 17:14:56.358289    1767 certs.go:226] acquiring lock for ca certs: {Name:mk3d4665cb5756ae49ea6f7cd2a58d273aecc507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:56.359976    1767 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key
	I0926 17:14:56.484318    1767 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt ...
	I0926 17:14:56.484332    1767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt: {Name:mk79174592f675cc8be28c00258421d58e660c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:56.484641    1767 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key ...
	I0926 17:14:56.484649    1767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key: {Name:mkab16dcb81b47309059233dd62d909d04928b36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:56.484859    1767 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key
	I0926 17:14:56.547933    1767 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt ...
	I0926 17:14:56.547943    1767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt: {Name:mk2d436b65b48a26210844e6ec25c137f6ac4ca7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:56.548250    1767 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key ...
	I0926 17:14:56.548258    1767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key: {Name:mk3dd35cdfc87bf3298d9addeade96a18562b865 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:56.548503    1767 certs.go:256] generating profile certs ...
	I0926 17:14:56.548571    1767 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.key
	I0926 17:14:56.548586    1767 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt with IP's: []
	I0926 17:14:56.639770    1767 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt ...
	I0926 17:14:56.639786    1767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: {Name:mk1ca10ce26e3878b0ca925283811d24fb2d0b00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:56.640313    1767 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.key ...
	I0926 17:14:56.640322    1767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.key: {Name:mk2356b2ad9100fee61661866764ee25043e02b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:56.640577    1767 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/apiserver.key.7ec15400
	I0926 17:14:56.640603    1767 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/apiserver.crt.7ec15400 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.2]
	I0926 17:14:56.873106    1767 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/apiserver.crt.7ec15400 ...
	I0926 17:14:56.873120    1767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/apiserver.crt.7ec15400: {Name:mkbfcfbca093a69a688581075a27aaba8255591c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:56.873414    1767 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/apiserver.key.7ec15400 ...
	I0926 17:14:56.873429    1767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/apiserver.key.7ec15400: {Name:mk649080a8bf029b0d72175fa1792b6f264bce4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:56.873624    1767 certs.go:381] copying /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/apiserver.crt.7ec15400 -> /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/apiserver.crt
	I0926 17:14:56.873810    1767 certs.go:385] copying /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/apiserver.key.7ec15400 -> /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/apiserver.key
	I0926 17:14:56.873971    1767 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/proxy-client.key
	I0926 17:14:56.873989    1767 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/proxy-client.crt with IP's: []
	I0926 17:14:57.172190    1767 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/proxy-client.crt ...
	I0926 17:14:57.172208    1767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/proxy-client.crt: {Name:mkcf9b4f331900be777943a155a5ee43fbd74230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:57.172526    1767 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/proxy-client.key ...
	I0926 17:14:57.172535    1767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/proxy-client.key: {Name:mk45f8a16384deee756673a66f272552e1d831c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:57.173012    1767 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 17:14:57.173065    1767 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem (1082 bytes)
	I0926 17:14:57.173108    1767 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem (1123 bytes)
	I0926 17:14:57.173149    1767 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem (1675 bytes)
	I0926 17:14:57.173671    1767 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 17:14:57.197266    1767 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 17:14:57.221056    1767 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 17:14:57.241976    1767 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0926 17:14:57.261745    1767 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0926 17:14:57.281155    1767 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 17:14:57.301691    1767 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 17:14:57.321404    1767 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 17:14:57.342149    1767 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 17:14:57.361945    1767 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 17:14:57.377358    1767 ssh_runner.go:195] Run: openssl version
	I0926 17:14:57.381832    1767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 17:14:57.390717    1767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:14:57.394223    1767 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:14:57.394269    1767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:14:57.398565    1767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 17:14:57.407393    1767 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 17:14:57.410581    1767 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 17:14:57.410624    1767 kubeadm.go:392] StartCluster: {Name:addons-433000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-433000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:14:57.410727    1767 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 17:14:57.426492    1767 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 17:14:57.434355    1767 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 17:14:57.441893    1767 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 17:14:57.449448    1767 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 17:14:57.449458    1767 kubeadm.go:157] found existing configuration files:
	
	I0926 17:14:57.449504    1767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 17:14:57.456683    1767 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 17:14:57.456731    1767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 17:14:57.464797    1767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 17:14:57.471976    1767 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 17:14:57.472026    1767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 17:14:57.480339    1767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 17:14:57.487662    1767 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 17:14:57.487710    1767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 17:14:57.495198    1767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 17:14:57.502494    1767 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 17:14:57.502539    1767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0926 17:14:57.509886    1767 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0926 17:14:57.544377    1767 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0926 17:14:57.544750    1767 kubeadm.go:310] [preflight] Running pre-flight checks
	I0926 17:14:57.616313    1767 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 17:14:57.616406    1767 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 17:14:57.616477    1767 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 17:14:57.624208    1767 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 17:14:57.670663    1767 out.go:235]   - Generating certificates and keys ...
	I0926 17:14:57.670744    1767 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0926 17:14:57.670816    1767 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0926 17:14:57.733204    1767 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 17:14:58.062901    1767 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0926 17:14:58.184975    1767 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0926 17:14:58.297063    1767 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0926 17:14:58.787943    1767 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0926 17:14:58.788072    1767 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-433000 localhost] and IPs [192.169.0.2 127.0.0.1 ::1]
	I0926 17:14:59.132851    1767 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0926 17:14:59.132975    1767 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-433000 localhost] and IPs [192.169.0.2 127.0.0.1 ::1]
	I0926 17:14:59.222843    1767 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 17:14:59.391616    1767 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 17:14:59.472052    1767 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0926 17:14:59.472192    1767 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 17:14:59.761426    1767 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 17:14:59.879715    1767 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 17:15:00.463351    1767 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 17:15:00.903351    1767 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 17:15:01.070195    1767 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 17:15:01.070670    1767 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 17:15:01.072290    1767 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 17:15:01.114532    1767 out.go:235]   - Booting up control plane ...
	I0926 17:15:01.114635    1767 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 17:15:01.114705    1767 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 17:15:01.114804    1767 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 17:15:01.114914    1767 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 17:15:01.114985    1767 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 17:15:01.115020    1767 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0926 17:15:01.192517    1767 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 17:15:01.192624    1767 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 17:15:02.192855    1767 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000968525s
	I0926 17:15:02.192926    1767 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0926 17:15:06.194438    1767 kubeadm.go:310] [api-check] The API server is healthy after 4.002657288s
	I0926 17:15:06.210604    1767 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 17:15:06.219230    1767 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 17:15:06.236123    1767 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 17:15:06.236287    1767 kubeadm.go:310] [mark-control-plane] Marking the node addons-433000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 17:15:06.256998    1767 kubeadm.go:310] [bootstrap-token] Using token: u2jyfc.mev8wim0o14iy6uo
	I0926 17:15:06.283718    1767 out.go:235]   - Configuring RBAC rules ...
	I0926 17:15:06.283901    1767 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 17:15:06.315448    1767 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 17:15:06.320961    1767 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 17:15:06.325733    1767 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 17:15:06.327697    1767 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 17:15:06.329899    1767 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 17:15:06.603956    1767 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 17:15:07.013556    1767 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0926 17:15:07.603662    1767 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0926 17:15:07.605276    1767 kubeadm.go:310] 
	I0926 17:15:07.605336    1767 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0926 17:15:07.605349    1767 kubeadm.go:310] 
	I0926 17:15:07.605426    1767 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0926 17:15:07.605435    1767 kubeadm.go:310] 
	I0926 17:15:07.605454    1767 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0926 17:15:07.605499    1767 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 17:15:07.605538    1767 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 17:15:07.605543    1767 kubeadm.go:310] 
	I0926 17:15:07.605584    1767 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0926 17:15:07.605588    1767 kubeadm.go:310] 
	I0926 17:15:07.605622    1767 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 17:15:07.605626    1767 kubeadm.go:310] 
	I0926 17:15:07.605663    1767 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0926 17:15:07.605715    1767 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 17:15:07.605778    1767 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 17:15:07.605787    1767 kubeadm.go:310] 
	I0926 17:15:07.605849    1767 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 17:15:07.605917    1767 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0926 17:15:07.605926    1767 kubeadm.go:310] 
	I0926 17:15:07.606021    1767 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token u2jyfc.mev8wim0o14iy6uo \
	I0926 17:15:07.606109    1767 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cde4a9f1d02b17efd71ed93d12958f8464596b5619fa355326db09dcf9a7790d \
	I0926 17:15:07.606133    1767 kubeadm.go:310] 	--control-plane 
	I0926 17:15:07.606141    1767 kubeadm.go:310] 
	I0926 17:15:07.606204    1767 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0926 17:15:07.606209    1767 kubeadm.go:310] 
	I0926 17:15:07.606277    1767 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token u2jyfc.mev8wim0o14iy6uo \
	I0926 17:15:07.606370    1767 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cde4a9f1d02b17efd71ed93d12958f8464596b5619fa355326db09dcf9a7790d 
	I0926 17:15:07.607365    1767 kubeadm.go:310] W0927 00:14:57.636221    1542 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 17:15:07.607583    1767 kubeadm.go:310] W0927 00:14:57.636745    1542 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 17:15:07.607669    1767 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0926 17:15:07.607683    1767 cni.go:84] Creating CNI manager for ""
	I0926 17:15:07.607695    1767 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 17:15:07.632336    1767 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0926 17:15:07.676899    1767 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 17:15:07.686545    1767 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0926 17:15:07.701123    1767 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 17:15:07.701179    1767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:15:07.701195    1767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-433000 minikube.k8s.io/updated_at=2024_09_26T17_15_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=addons-433000 minikube.k8s.io/primary=true
	I0926 17:15:07.795406    1767 ops.go:34] apiserver oom_adj: -16
	I0926 17:15:07.795599    1767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:15:08.295710    1767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:15:08.795675    1767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:15:09.295715    1767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:15:09.795738    1767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:15:10.296717    1767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:15:10.795794    1767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:15:11.295703    1767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:15:11.795808    1767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:15:12.296981    1767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:15:12.373951    1767 kubeadm.go:1113] duration metric: took 4.67278336s to wait for elevateKubeSystemPrivileges
	I0926 17:15:12.373968    1767 kubeadm.go:394] duration metric: took 14.963205817s to StartCluster
	I0926 17:15:12.373988    1767 settings.go:142] acquiring lock: {Name:mka8948d0f70add5c5f20f2eca7124a97a496c1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:15:12.375139    1767 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:15:12.375371    1767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/kubeconfig: {Name:mkb9c069c578c490d8cb734b05a21821e9215482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:15:12.376285    1767 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 17:15:12.376304    1767 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:15:12.376334    1767 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0926 17:15:12.376417    1767 addons.go:69] Setting yakd=true in profile "addons-433000"
	I0926 17:15:12.376426    1767 addons.go:69] Setting inspektor-gadget=true in profile "addons-433000"
	I0926 17:15:12.376430    1767 addons.go:69] Setting storage-provisioner=true in profile "addons-433000"
	I0926 17:15:12.376450    1767 addons.go:69] Setting volcano=true in profile "addons-433000"
	I0926 17:15:12.376464    1767 addons.go:69] Setting volumesnapshots=true in profile "addons-433000"
	I0926 17:15:12.376463    1767 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-433000"
	I0926 17:15:12.376486    1767 addons.go:234] Setting addon volumesnapshots=true in "addons-433000"
	I0926 17:15:12.376489    1767 addons.go:69] Setting metrics-server=true in profile "addons-433000"
	I0926 17:15:12.376497    1767 config.go:182] Loaded profile config "addons-433000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:15:12.376501    1767 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-433000"
	I0926 17:15:12.376494    1767 addons.go:69] Setting ingress=true in profile "addons-433000"
	I0926 17:15:12.376521    1767 host.go:66] Checking if "addons-433000" exists ...
	I0926 17:15:12.376518    1767 addons.go:69] Setting registry=true in profile "addons-433000"
	I0926 17:15:12.376530    1767 addons.go:234] Setting addon ingress=true in "addons-433000"
	I0926 17:15:12.376539    1767 addons.go:234] Setting addon registry=true in "addons-433000"
	I0926 17:15:12.376521    1767 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-433000"
	I0926 17:15:12.376568    1767 host.go:66] Checking if "addons-433000" exists ...
	I0926 17:15:12.376574    1767 host.go:66] Checking if "addons-433000" exists ...
	I0926 17:15:12.376455    1767 addons.go:69] Setting cloud-spanner=true in profile "addons-433000"
	I0926 17:15:12.376627    1767 addons.go:234] Setting addon cloud-spanner=true in "addons-433000"
	I0926 17:15:12.376463    1767 addons.go:234] Setting addon storage-provisioner=true in "addons-433000"
	I0926 17:15:12.376650    1767 host.go:66] Checking if "addons-433000" exists ...
	I0926 17:15:12.376448    1767 addons.go:234] Setting addon yakd=true in "addons-433000"
	I0926 17:15:12.376670    1767 host.go:66] Checking if "addons-433000" exists ...
	I0926 17:15:12.376461    1767 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-433000"
	I0926 17:15:12.376692    1767 host.go:66] Checking if "addons-433000" exists ...
	I0926 17:15:12.376466    1767 addons.go:69] Setting default-storageclass=true in profile "addons-433000"
	I0926 17:15:12.376728    1767 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-433000"
	I0926 17:15:12.376448    1767 addons.go:234] Setting addon inspektor-gadget=true in "addons-433000"
	I0926 17:15:12.376754    1767 host.go:66] Checking if "addons-433000" exists ...
	I0926 17:15:12.376765    1767 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-433000"
	I0926 17:15:12.376773    1767 host.go:66] Checking if "addons-433000" exists ...
	I0926 17:15:12.376936    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.376941    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.376960    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.376966    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.377002    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.377012    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.377020    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.377027    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.377030    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.377049    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.376472    1767 addons.go:69] Setting gcp-auth=true in profile "addons-433000"
	I0926 17:15:12.377071    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.377103    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.377109    1767 mustload.go:65] Loading cluster: addons-433000
	I0926 17:15:12.376474    1767 addons.go:234] Setting addon volcano=true in "addons-433000"
	I0926 17:15:12.377156    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.377159    1767 host.go:66] Checking if "addons-433000" exists ...
	I0926 17:15:12.377195    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.376479    1767 addons.go:69] Setting ingress-dns=true in profile "addons-433000"
	I0926 17:15:12.377278    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.377290    1767 addons.go:234] Setting addon ingress-dns=true in "addons-433000"
	I0926 17:15:12.377197    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.378109    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.376500    1767 addons.go:234] Setting addon metrics-server=true in "addons-433000"
	I0926 17:15:12.378640    1767 host.go:66] Checking if "addons-433000" exists ...
	I0926 17:15:12.378824    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.376514    1767 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-433000"
	I0926 17:15:12.379063    1767 host.go:66] Checking if "addons-433000" exists ...
	I0926 17:15:12.379139    1767 host.go:66] Checking if "addons-433000" exists ...
	I0926 17:15:12.379156    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.379193    1767 config.go:182] Loaded profile config "addons-433000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:15:12.379355    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.381084    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.381525    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.382550    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.383351    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.383398    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.383400    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.384122    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.384251    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.384391    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.384521    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.392127    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49672
	I0926 17:15:12.393110    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49673
	I0926 17:15:12.397129    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.397941    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49676
	I0926 17:15:12.397991    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.398234    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49677
	I0926 17:15:12.398512    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.398569    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.401915    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.402133    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49681
	I0926 17:15:12.402714    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.402851    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.402902    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.402149    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.403043    1767 out.go:177] * Verifying Kubernetes components...
	I0926 17:15:12.402167    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49680
	I0926 17:15:12.407996    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49684
	I0926 17:15:12.408013    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.408043    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.408043    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.408145    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49683
	I0926 17:15:12.409167    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.409228    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.409179    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.409570    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.409604    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.409922    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.409908    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.409962    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.410176    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.410228    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.414232    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.415243    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.415258    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.415391    1767 main.go:141] libmachine: (addons-433000) Calling .GetState
	I0926 17:15:12.415632    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49687
	I0926 17:15:12.415697    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49688
	I0926 17:15:12.415748    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.415929    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.415978    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.415993    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.416201    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:15:12.416221    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:15:12.416501    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.416516    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.416537    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.416610    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.421579    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.421600    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.421658    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.421649    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.421560    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.421677    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.421708    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49692
	I0926 17:15:12.421731    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.421581    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.421785    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49693
	I0926 17:15:12.423469    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.423720    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.423765    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.423962    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.426418    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.426378    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.426512    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49696
	I0926 17:15:12.426885    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.427023    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49697
	I0926 17:15:12.429276    1767 main.go:141] libmachine: (addons-433000) Calling .GetState
	I0926 17:15:12.429333    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.429380    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.429405    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.429426    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.430003    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.430496    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.430747    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.430989    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:15:12.431030    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.431280    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.431249    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.431498    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49700
	I0926 17:15:12.431606    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.431629    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.431728    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.431751    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:15:12.431726    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.435889    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.436055    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.437638    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.437937    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.438035    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.438099    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.438126    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49702
	I0926 17:15:12.438132    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49703
	I0926 17:15:12.438107    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.438163    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.437628    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.438208    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.438247    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.438388    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.439042    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.439510    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.439613    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.439968    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.439957    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.440404    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.440446    1767 main.go:141] libmachine: (addons-433000) Calling .GetState
	I0926 17:15:12.444017    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.444844    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49706
	I0926 17:15:12.445286    1767 host.go:66] Checking if "addons-433000" exists ...
	I0926 17:15:12.446323    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:15:12.447561    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:15:12.447699    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.447662    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.447769    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49707
	I0926 17:15:12.447972    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.448114    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.448121    1767 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-433000"
	I0926 17:15:12.448045    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.448200    1767 addons.go:234] Setting addon default-storageclass=true in "addons-433000"
	I0926 17:15:12.448225    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.448237    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.448267    1767 host.go:66] Checking if "addons-433000" exists ...
	I0926 17:15:12.448291    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.448359    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.450930    1767 host.go:66] Checking if "addons-433000" exists ...
	I0926 17:15:12.452838    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.452845    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.452931    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.452928    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49710
	I0926 17:15:12.452958    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.452968    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.453951    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.454010    1767 main.go:141] libmachine: (addons-433000) Calling .GetState
	I0926 17:15:12.454336    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.454330    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.454718    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.454805    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.454960    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.457696    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.457786    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:15:12.457827    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.457940    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.458048    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:15:12.458150    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:15:12.458438    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.458479    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49712
	I0926 17:15:12.458549    1767 main.go:141] libmachine: (addons-433000) Calling .GetState
	I0926 17:15:12.458754    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.461502    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.461883    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:15:12.461965    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49713
	I0926 17:15:12.462048    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:15:12.463657    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:15:12.462027    1767 main.go:141] libmachine: (addons-433000) Calling .GetState
	I0926 17:15:12.464548    1767 main.go:141] libmachine: (addons-433000) Calling .GetState
	I0926 17:15:12.464565    1767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:15:12.464590    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.464747    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.472748    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:15:12.472749    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49718
	I0926 17:15:12.472773    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.464762    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.465522    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:15:12.472970    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:15:12.465975    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49716
	I0926 17:15:12.473027    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49719
	I0926 17:15:12.472871    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49717
	I0926 17:15:12.473073    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:15:12.473111    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:15:12.473369    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.473547    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.473744    1767 main.go:141] libmachine: (addons-433000) Calling .GetState
	I0926 17:15:12.473841    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.473920    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.503577    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.473944    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.473955    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.474456    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.474676    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:15:12.476000    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49724
	I0926 17:15:12.478350    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49725
	I0926 17:15:12.503811    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:15:12.479141    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49726
	I0926 17:15:12.480718    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49727
	I0926 17:15:12.503925    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:15:12.503170    1767 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0926 17:15:12.503999    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.503676    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.504099    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.504128    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.524518    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.504177    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.524545    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.524549    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.504267    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.504342    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.524583    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.504339    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.504373    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.504398    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.505084    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:15:12.523962    1767 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0926 17:15:12.524681    1767 main.go:141] libmachine: (addons-433000) Calling .GetState
	I0926 17:15:12.524931    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.524937    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:15:12.524945    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.524944    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.524954    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.524979    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.525107    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.525176    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.582594    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.582648    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.525177    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.582695    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.525187    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.582727    1767 main.go:141] libmachine: (addons-433000) Calling .GetState
	I0926 17:15:12.526081    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:15:12.582739    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.545487    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:15:12.545628    1767 main.go:141] libmachine: (addons-433000) Calling .GetState
	I0926 17:15:12.545127    1767 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0926 17:15:12.582144    1767 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0926 17:15:12.582155    1767 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0926 17:15:12.603438    1767 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0926 17:15:12.582786    1767 main.go:141] libmachine: (addons-433000) Calling .GetState
	I0926 17:15:12.603467    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:15:12.582817    1767 main.go:141] libmachine: (addons-433000) Calling .GetState
	I0926 17:15:12.582821    1767 main.go:141] libmachine: (addons-433000) Calling .GetState
	I0926 17:15:12.582954    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:15:12.582974    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:15:12.603522    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:15:12.603530    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:15:12.583038    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.583088    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.583115    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.583147    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.584178    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:15:12.584199    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:15:12.603046    1767 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0926 17:15:12.640821    1767 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0926 17:15:12.603664    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:15:12.640877    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:15:12.603679    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:15:12.640923    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:15:12.603676    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:15:12.603717    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:15:12.640949    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:15:12.640957    1767 main.go:141] libmachine: (addons-433000) Calling .GetState
	I0926 17:15:12.603760    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:15:12.604107    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.604827    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:15:12.641039    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.604835    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:15:12.604844    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:15:12.640218    1767 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0926 17:15:12.640904    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:15:12.641133    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:15:12.641169    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:15:12.641231    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:15:12.642115    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:12.642549    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:15:12.650203    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49733
	I0926 17:15:12.678041    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:15:12.678306    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:15:12.714036    1767 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0926 17:15:12.714524    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:12.714670    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:15:12.714905    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.719072    1767 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:15:12.719079    1767 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
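	
	The bash pipeline above edits the coredns ConfigMap in place: sed inserts a hosts block ahead of the Corefile's forward directive and a log directive ahead of errors, and kubectl replace writes the result back. Reconstructed from those sed expressions, the affected Corefile fragment should read roughly as follows (the default Corefile's other plugins are elided):
	
		        log
		        errors
		        ...
		        hosts {
		           192.169.0.1 host.minikube.internal
		           fallthrough
		        }
		        forward . /etc/resolv.conf
	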
	I0926 17:15:12.751111    1767 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0926 17:15:12.751688    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:15:12.772056    1767 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0926 17:15:12.772545    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:15:12.780907    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49735
	I0926 17:15:12.809093    1767 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0926 17:15:12.809098    1767 out.go:177]   - Using image docker.io/registry:2.8.3
	I0926 17:15:12.809118    1767 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0926 17:15:12.810048    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:15:12.810496    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.827646    1767 node_ready.go:35] waiting up to 6m0s for node "addons-433000" to be "Ready" ...
	I0926 17:15:12.830258    1767 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0926 17:15:12.830986    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:12.867150    1767 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0926 17:15:12.867161    1767 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 17:15:12.867179    1767 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0926 17:15:12.900937    1767 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0926 17:15:12.903246    1767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0926 17:15:12.903268    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.940643    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:15:12.903323    1767 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0926 17:15:12.903675    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:12.961513    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:12.940877    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:15:12.940982    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.942154    1767 node_ready.go:49] node "addons-433000" has status "Ready":"True"
	I0926 17:15:12.961097    1767 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0926 17:15:12.961576    1767 node_ready.go:38] duration metric: took 58.290447ms for node "addons-433000" to be "Ready" ...
	I0926 17:15:12.961594    1767 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0926 17:15:12.961593    1767 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
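	
	node_ready and pod_ready poll the API server for these conditions; a rough kubectl equivalent of the two checks (context name taken from this run) would be:
	
		kubectl --context addons-433000 get node addons-433000 \
		  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
		kubectl --context addons-433000 -n kube-system get pods -l k8s-app=kube-dns \
		  -o custom-columns=NAME:.metadata.name,PHASE:.status.phase
	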
	I0926 17:15:12.961176    1767 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 17:15:12.961608    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:15:12.961622    1767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0926 17:15:12.961202    1767 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0926 17:15:12.961651    1767 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0926 17:15:12.961642    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:15:12.961744    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:15:12.961779    1767 main.go:141] libmachine: (addons-433000) Calling .GetState
	I0926 17:15:12.961880    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:12.961885    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:15:12.961909    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:15:12.961933    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:15:12.961972    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:15:12.962063    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:15:12.962059    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:15:12.962092    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:15:12.962110    1767 main.go:141] libmachine: (addons-433000) Calling .GetState
	I0926 17:15:12.962141    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:15:12.962244    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:15:12.962247    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:15:12.962292    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:15:12.962362    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:15:12.962401    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:15:12.962419    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:15:12.963314    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:15:12.963368    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:15:12.963530    1767 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 17:15:12.982228    1767 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 17:15:12.981992    1767 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0926 17:15:12.982266    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:15:13.040596    1767 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 17:15:13.040620    1767 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0926 17:15:13.040614    1767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0926 17:15:13.041000    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:15:12.982087    1767 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0926 17:15:13.041103    1767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0926 17:15:13.041124    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:15:12.982417    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:15:12.986009    1767 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0926 17:15:13.061386    1767 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0926 17:15:13.006357    1767 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0926 17:15:13.061411    1767 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0926 17:15:13.019164    1767 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0926 17:15:13.061440    1767 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0926 17:15:13.019229    1767 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 17:15:13.061456    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:15:13.061465    1767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 17:15:13.040707    1767 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0926 17:15:13.082500    1767 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0926 17:15:13.040756    1767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 17:15:13.040754    1767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0926 17:15:13.041324    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:15:13.041331    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:15:13.061133    1767 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0926 17:15:13.061483    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:15:13.061520    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:15:13.061577    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:15:13.076566    1767 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0926 17:15:13.119755    1767 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0926 17:15:13.078768    1767 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0926 17:15:13.119776    1767 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0926 17:15:13.177465    1767 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0926 17:15:13.082747    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:15:13.082771    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:15:13.082786    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:15:13.116027    1767 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0926 17:15:13.119882    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:15:13.119893    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:15:13.177638    1767 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0926 17:15:13.177775    1767 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0926 17:15:13.177920    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:15:13.177920    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:15:13.177933    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:15:13.196098    1767 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r5rp7" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:13.197978    1767 out.go:177]   - Using image docker.io/busybox:stable
	I0926 17:15:13.198381    1767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0926 17:15:13.198529    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:15:13.198556    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:15:13.198559    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:15:13.198570    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:15:13.219052    1767 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 17:15:13.219066    1767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0926 17:15:13.198573    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:15:13.219085    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:15:13.206544    1767 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0926 17:15:13.219117    1767 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0926 17:15:13.219281    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:15:13.219285    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:15:13.219411    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:15:13.219562    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:15:13.219670    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:15:13.219759    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:15:13.240196    1767 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0926 17:15:13.240196    1767 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0926 17:15:13.240305    1767 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 17:15:13.240344    1767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0926 17:15:13.240365    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:15:13.240547    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:15:13.240651    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:15:13.240758    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:15:13.240868    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:15:13.276179    1767 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0926 17:15:13.276193    1767 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0926 17:15:13.298970    1767 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0926 17:15:13.298983    1767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0926 17:15:13.298996    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:15:13.299147    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:15:13.299242    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:15:13.299328    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:15:13.299409    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:15:13.307572    1767 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 17:15:13.307582    1767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0926 17:15:13.335187    1767 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0926 17:15:13.346820    1767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0926 17:15:13.392955    1767 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0926 17:15:13.414221    1767 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0926 17:15:13.414237    1767 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0926 17:15:13.442695    1767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 17:15:13.450922    1767 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0926 17:15:13.509204    1767 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0926 17:15:13.550986    1767 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0926 17:15:13.572780    1767 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0926 17:15:13.572792    1767 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0926 17:15:13.575089    1767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 17:15:13.576820    1767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 17:15:13.588043    1767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 17:15:13.588153    1767 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0926 17:15:13.588163    1767 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0926 17:15:13.588194    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:15:13.588410    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:15:13.588526    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:15:13.588634    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:15:13.588722    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:15:13.702792    1767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0926 17:15:13.725540    1767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 17:15:13.752958    1767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 17:15:13.756658    1767 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0926 17:15:13.756669    1767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0926 17:15:13.757756    1767 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0926 17:15:13.757766    1767 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0926 17:15:13.831409    1767 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.021690496s)
	I0926 17:15:13.831430    1767 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0926 17:15:13.957253    1767 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0926 17:15:13.957266    1767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0926 17:15:14.280807    1767 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0926 17:15:14.280820    1767 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0926 17:15:14.338696    1767 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-433000" context rescaled to 1 replicas
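	
	kapi.go:214 rescales the default coredns deployment down to a single replica; done by hand this is roughly:
	
		kubectl --context addons-433000 -n kube-system scale deployment coredns --replicas=1
	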
	I0926 17:15:14.364847    1767 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0926 17:15:14.364860    1767 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0926 17:15:14.563932    1767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0926 17:15:14.597735    1767 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0926 17:15:14.597751    1767 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0926 17:15:14.739766    1767 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0926 17:15:14.739778    1767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0926 17:15:14.749877    1767 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 17:15:14.749895    1767 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0926 17:15:14.881998    1767 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0926 17:15:14.882018    1767 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0926 17:15:14.988667    1767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 17:15:15.030479    1767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0926 17:15:15.155606    1767 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0926 17:15:15.155619    1767 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0926 17:15:15.221195    1767 pod_ready.go:103] pod "coredns-7c65d6cfc9-r5rp7" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:15.418222    1767 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0926 17:15:15.418241    1767 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0926 17:15:15.560546    1767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.47795582s)
	I0926 17:15:15.560577    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:15.560583    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:15.560755    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:15.560792    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:15.560801    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:15.560813    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:15.560820    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:15.560944    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:15.560976    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:15.560985    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:15.563447    1767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.480880299s)
	I0926 17:15:15.563465    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:15.563470    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:15.563635    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:15.563640    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:15.563645    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:15.563655    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:15.563661    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:15.563783    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:15.563793    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:15.563805    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:15.863497    1767 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0926 17:15:15.863512    1767 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0926 17:15:15.959760    1767 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0926 17:15:15.959773    1767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0926 17:15:16.189170    1767 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0926 17:15:16.189185    1767 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0926 17:15:16.493196    1767 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0926 17:15:16.493208    1767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0926 17:15:16.624261    1767 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0926 17:15:16.624274    1767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0926 17:15:16.758519    1767 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 17:15:16.758534    1767 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0926 17:15:16.861171    1767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.514297164s)
	I0926 17:15:16.861205    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:16.861213    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:16.861406    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:16.861415    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:16.861413    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:16.861425    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:16.861437    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:16.861604    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:16.861606    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:16.861618    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:16.867805    1767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 17:15:16.888381    1767 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-433000 service yakd-dashboard -n yakd-dashboard
	
	I0926 17:15:17.702902    1767 pod_ready.go:103] pod "coredns-7c65d6cfc9-r5rp7" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:18.612786    1767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.170019786s)
	W0926 17:15:18.612822    1767 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0926 17:15:18.612865    1767 retry.go:31] will retry after 280.473904ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
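	
	The failure above is the usual CRD-ordering race: the VolumeSnapshotClass object is submitted in the same apply as the CRDs that define its kind, and the API discovery cache has not yet picked up the new group. minikube simply retries (the retry below, at 17:15:18.893729, re-runs the apply with --force). Sequencing the CRDs first and waiting for them to be established avoids the race entirely, e.g.:
	
		kubectl apply \
		  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
		  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
		  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
		kubectl wait --for=condition=Established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
		  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
		  crd/volumesnapshots.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
		  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
		  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	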
	I0926 17:15:18.612890    1767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.037738014s)
	I0926 17:15:18.612912    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:18.612920    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:18.612932    1767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.036050976s)
	I0926 17:15:18.612949    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:18.612959    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:18.612981    1767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.02487383s)
	I0926 17:15:18.612995    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:18.613001    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:18.613073    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:18.613086    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:18.613093    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:18.613100    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:18.613177    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:18.613204    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:18.613216    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:18.613215    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:18.613223    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:18.613232    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:18.613245    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:18.613255    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:18.613279    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:18.613286    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:18.613348    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:18.613358    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:18.613437    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:18.613443    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:18.613451    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:18.613506    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:18.613522    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:18.613539    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:18.705396    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:18.705410    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:18.705589    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:18.705598    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:18.893729    1767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 17:15:19.648146    1767 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0926 17:15:19.648169    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:15:19.648314    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:15:19.648419    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:15:19.648534    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:15:19.648649    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:15:20.012581    1767 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0926 17:15:20.057647    1767 addons.go:234] Setting addon gcp-auth=true in "addons-433000"
	I0926 17:15:20.057678    1767 host.go:66] Checking if "addons-433000" exists ...
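	
	addons.go:234 flips gcp-auth on after the credentials file lands in the VM; this is roughly equivalent to enabling the addon by hand with:
	
		minikube -p addons-433000 addons enable gcp-auth
	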
	I0926 17:15:20.057956    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:20.057981    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:20.067475    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49752
	I0926 17:15:20.067862    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:20.068232    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:20.068253    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:20.068550    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:20.068982    1767 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:15:20.069012    1767 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:15:20.078578    1767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49754
	I0926 17:15:20.078947    1767 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:15:20.079300    1767 main.go:141] libmachine: Using API Version  1
	I0926 17:15:20.079311    1767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:15:20.079542    1767 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:15:20.079653    1767 main.go:141] libmachine: (addons-433000) Calling .GetState
	I0926 17:15:20.079738    1767 main.go:141] libmachine: (addons-433000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:15:20.079828    1767 main.go:141] libmachine: (addons-433000) DBG | hyperkit pid from json: 1782
	I0926 17:15:20.080833    1767 main.go:141] libmachine: (addons-433000) Calling .DriverName
	I0926 17:15:20.081237    1767 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0926 17:15:20.081250    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHHostname
	I0926 17:15:20.081340    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHPort
	I0926 17:15:20.081414    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHKeyPath
	I0926 17:15:20.081499    1767 main.go:141] libmachine: (addons-433000) Calling .GetSSHUsername
	I0926 17:15:20.081584    1767 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/addons-433000/id_rsa Username:docker}
	I0926 17:15:20.204149    1767 pod_ready.go:103] pod "coredns-7c65d6cfc9-r5rp7" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:22.240218    1767 pod_ready.go:103] pod "coredns-7c65d6cfc9-r5rp7" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:22.724349    1767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.998699888s)
	I0926 17:15:22.724391    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:22.724403    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:22.724417    1767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.971356706s)
	I0926 17:15:22.724443    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:22.724456    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:22.724464    1767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.160432498s)
	I0926 17:15:22.724485    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:22.724498    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:22.724542    1767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.735783005s)
	I0926 17:15:22.724567    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:22.724578    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:22.724683    1767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.694093589s)
	I0926 17:15:22.724707    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:22.724716    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:22.724723    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:22.724731    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:22.724718    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:22.724764    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:22.724763    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:22.724771    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:22.724778    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:22.724819    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:22.724829    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:22.724837    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:22.724843    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:22.724839    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:22.724861    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:22.724870    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:22.724878    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:22.724892    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:22.724900    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:22.724913    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:22.724849    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:22.724967    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:22.724980    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:22.724990    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:22.725001    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:22.725084    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:22.725093    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:22.725101    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:22.725117    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:22.725124    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:22.725134    1767 addons.go:475] Verifying addon ingress=true in "addons-433000"
	I0926 17:15:22.725385    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:22.725398    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:22.725409    1767 addons.go:475] Verifying addon registry=true in "addons-433000"
	I0926 17:15:22.725439    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:22.725398    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:22.726254    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:22.725624    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:22.725631    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:22.726272    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:22.726263    1767 addons.go:475] Verifying addon metrics-server=true in "addons-433000"
	I0926 17:15:22.727123    1767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.024230581s)
	I0926 17:15:22.727145    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:22.727153    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:22.727294    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:22.727304    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:22.727309    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:22.727313    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:22.727323    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:22.727511    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:22.727519    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:22.752028    1767 out.go:177] * Verifying ingress addon...
	I0926 17:15:22.793081    1767 out.go:177] * Verifying registry addon...
	I0926 17:15:22.852552    1767 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0926 17:15:22.873624    1767 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0926 17:15:22.892083    1767 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0926 17:15:22.892095    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:22.916051    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:22.916064    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:22.916214    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:22.916216    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:22.916226    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:22.995266    1767 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0926 17:15:22.995281    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
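
A roughly equivalent readiness check to the kapi.go polling above can be run by hand with kubectl; this is a sketch, with the selectors and namespaces taken from the log lines, not a reproduction of the harness:

    # block until the ingress controller pods report Ready, or give up after 6 minutes
    kubectl wait pod -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m
    # same idea for the registry addon pods in kube-system
    kubectl wait pod -n kube-system -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=6m

kubectl wait exits non-zero on timeout, which is what makes it usable as a drop-in readiness gate in scripts.
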
	I0926 17:15:23.327000    1767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.459099508s)
	I0926 17:15:23.327031    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:23.327039    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:23.327057    1767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.43326368s)
	I0926 17:15:23.327074    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:23.327082    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:23.327105    1767 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.245826189s)
	I0926 17:15:23.327215    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:23.327237    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:23.327247    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:23.327252    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:23.327257    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:23.327277    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:23.327281    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:23.327292    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:23.327299    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:23.327306    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:23.327402    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:23.327420    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:23.327425    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:23.327429    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:23.327432    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:23.327439    1767 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-433000"
	I0926 17:15:23.350367    1767 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0926 17:15:23.407742    1767 out.go:177] * Verifying csi-hostpath-driver addon...
	I0926 17:15:23.465917    1767 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0926 17:15:23.466493    1767 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0926 17:15:23.502717    1767 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0926 17:15:23.502734    1767 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0926 17:15:23.516481    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:23.516690    1767 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0926 17:15:23.516691    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:23.516699    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:23.561851    1767 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0926 17:15:23.561867    1767 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0926 17:15:23.586849    1767 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0926 17:15:23.586861    1767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0926 17:15:23.619517    1767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0926 17:15:23.856369    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:23.876515    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:23.970781    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:24.248787    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:24.248803    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:24.248960    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:24.248972    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:24.248979    1767 main.go:141] libmachine: Making call to close driver server
	I0926 17:15:24.248985    1767 main.go:141] libmachine: (addons-433000) Calling .Close
	I0926 17:15:24.249138    1767 main.go:141] libmachine: (addons-433000) DBG | Closing plugin on server side
	I0926 17:15:24.249150    1767 main.go:141] libmachine: Successfully made call to close driver server
	I0926 17:15:24.249167    1767 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 17:15:24.250031    1767 addons.go:475] Verifying addon gcp-auth=true in "addons-433000"
	I0926 17:15:24.289593    1767 out.go:177] * Verifying gcp-auth addon...
	I0926 17:15:24.350247    1767 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0926 17:15:24.352443    1767 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0926 17:15:24.354977    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:24.458771    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:24.469738    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:24.707705    1767 pod_ready.go:103] pod "coredns-7c65d6cfc9-r5rp7" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:24.856340    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:24.879945    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:24.972799    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:25.204749    1767 pod_ready.go:98] pod "coredns-7c65d6cfc9-r5rp7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-26 17:15:25 -0700 PDT Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-26 17:15:13 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-26 17:15:13 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-26 17:15:13 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-26 17:15:13 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.169.0.2 HostIPs:[{IP:192.169.0.2}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-26 17:15:13 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-26 17:15:14 -0700 PDT,FinishedAt:2024-09-26 17:15:25 -0700 PDT,ContainerID:docker://a82ea2e68ee6da92db0575461033ed1dafa730dd33ff515aed94743ba3837a84,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://a82ea2e68ee6da92db0575461033ed1dafa730dd33ff515aed94743ba3837a84 Started:0xc0027c7790 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0027c0490} {Name:kube-api-access-lt85c MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0027c04a0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0926 17:15:25.204769    1767 pod_ready.go:82] duration metric: took 12.006243816s for pod "coredns-7c65d6cfc9-r5rp7" in "kube-system" namespace to be "Ready" ...
	E0926 17:15:25.204777    1767 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-r5rp7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-26 17:15:25 -0700 PDT Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-26 17:15:13 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-26 17:15:13 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-26 17:15:13 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-26 17:15:13 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.169.0.2 HostIPs:[{IP:192.169.0.2}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-26 17:15:13 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-26 17:15:14 -0700 PDT,FinishedAt:2024-09-26 17:15:25 -0700 PDT,ContainerID:docker://a82ea2e68ee6da92db0575461033ed1dafa730dd33ff515aed94743ba3837a84,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://a82ea2e68ee6da92db0575461033ed1dafa730dd33ff515aed94743ba3837a84 Started:0xc0027c7790 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0027c0490} {Name:kube-api-access-lt85c MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0027c04a0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0926 17:15:25.204785    1767 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sndq2" in "kube-system" namespace to be "Ready" ...
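
The skip recorded just above is the harness noticing that a pod in phase Succeeded can never become Ready, so it stops waiting on coredns-7c65d6cfc9-r5rp7 and moves on to the sibling coredns pod. A quick manual look at the same field (hypothetical one-liner; pod name taken from the log):

    # print only the pod phase that the readiness check keys off
    kubectl get pod coredns-7c65d6cfc9-r5rp7 -n kube-system -o jsonpath='{.status.phase}'

Here that prints Succeeded, which is why pod_ready.go gives up on this pod immediately rather than burning the full 6m0s timeout.
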
	I0926 17:15:25.355066    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:25.375412    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:25.469890    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:25.855180    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:25.876956    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:25.970366    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:26.355167    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:26.375735    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:26.473874    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:26.854846    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:26.875925    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:26.970241    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:27.209727    1767 pod_ready.go:103] pod "coredns-7c65d6cfc9-sndq2" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:27.355015    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:27.376649    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:27.470741    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:27.854901    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:27.877705    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:27.969334    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:28.355106    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:28.376401    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:28.470469    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:28.855087    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:28.876296    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:28.970388    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:29.354695    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:29.375623    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:29.469873    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:29.708955    1767 pod_ready.go:103] pod "coredns-7c65d6cfc9-sndq2" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:29.855088    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:29.875939    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:29.970714    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:30.354688    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:30.378210    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:30.470655    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:30.854601    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:30.875695    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:30.969241    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:31.354918    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:31.377567    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:31.471392    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:31.854569    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:31.876007    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:31.969583    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:32.208743    1767 pod_ready.go:103] pod "coredns-7c65d6cfc9-sndq2" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:32.354752    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:32.376324    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:32.469303    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:32.855436    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:32.877757    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:32.970815    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:33.355325    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:33.377293    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:33.471919    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:33.855028    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:33.876026    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:33.969317    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:34.210957    1767 pod_ready.go:103] pod "coredns-7c65d6cfc9-sndq2" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:34.354571    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:34.377491    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:34.469433    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:34.854749    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:34.875683    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:34.969653    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:35.355183    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:35.376614    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:35.470476    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:35.854622    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:35.876409    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:35.971193    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:36.354892    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:36.376195    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:36.470705    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:36.708759    1767 pod_ready.go:103] pod "coredns-7c65d6cfc9-sndq2" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:36.854917    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:36.875721    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:36.970357    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:37.355354    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:37.376984    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:37.469809    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:37.855583    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:37.876420    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:38.085104    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:38.355330    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:38.376395    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:38.470853    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:38.708856    1767 pod_ready.go:103] pod "coredns-7c65d6cfc9-sndq2" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:38.855091    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:38.876572    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:38.969737    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:39.355075    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:39.375976    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:39.469987    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:39.854999    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:39.875830    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:39.970677    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:40.355212    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:40.379448    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:40.471085    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:40.712204    1767 pod_ready.go:103] pod "coredns-7c65d6cfc9-sndq2" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:40.856355    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:40.876612    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:40.972136    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:41.354929    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:41.377726    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:41.469561    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:41.866680    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:41.879627    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:41.971222    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:42.355360    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:42.376897    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:42.469819    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:42.854890    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:42.877169    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:42.970175    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:43.209498    1767 pod_ready.go:103] pod "coredns-7c65d6cfc9-sndq2" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:43.355365    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:43.376024    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:43.470080    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:43.854733    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:43.876198    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:43.970487    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:44.354051    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:44.375761    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:44.469452    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:44.853699    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:44.874591    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:44.969005    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:45.208310    1767 pod_ready.go:103] pod "coredns-7c65d6cfc9-sndq2" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:45.351898    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:45.374068    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:45.467954    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:45.850349    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:45.871272    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:45.964382    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:46.348617    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:46.447796    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:46.462895    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:46.847694    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:46.870424    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:46.962163    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:47.345907    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:47.369294    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:47.460567    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:47.698819    1767 pod_ready.go:103] pod "coredns-7c65d6cfc9-sndq2" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:47.845598    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:47.867760    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:47.962623    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:48.345087    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:48.365126    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:48.460065    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:48.843136    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:48.864729    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:48.957850    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:49.341632    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:49.363309    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:49.455763    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:49.840888    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:49.940556    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:49.957155    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:50.195281    1767 pod_ready.go:103] pod "coredns-7c65d6cfc9-sndq2" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:50.339488    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:50.361595    1767 kapi.go:107] duration metric: took 27.503171767s to wait for kubernetes.io/minikube-addons=registry ...
	I0926 17:15:50.456416    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:50.839228    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:50.952851    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:51.338525    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:51.452506    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:51.836645    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:51.952376    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:52.336089    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:52.451375    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:52.690876    1767 pod_ready.go:103] pod "coredns-7c65d6cfc9-sndq2" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:52.834798    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:52.950358    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:53.334703    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:53.448990    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:53.833642    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:53.949009    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:54.332435    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:54.447181    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:54.831815    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:54.946565    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:55.185842    1767 pod_ready.go:93] pod "coredns-7c65d6cfc9-sndq2" in "kube-system" namespace has status "Ready":"True"
	I0926 17:15:55.185854    1767 pod_ready.go:82] duration metric: took 30.004455977s for pod "coredns-7c65d6cfc9-sndq2" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:55.185862    1767 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-433000" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:55.190281    1767 pod_ready.go:93] pod "etcd-addons-433000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:15:55.190293    1767 pod_ready.go:82] duration metric: took 4.432722ms for pod "etcd-addons-433000" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:55.190300    1767 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-433000" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:55.193728    1767 pod_ready.go:93] pod "kube-apiserver-addons-433000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:15:55.193737    1767 pod_ready.go:82] duration metric: took 3.43774ms for pod "kube-apiserver-addons-433000" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:55.193743    1767 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-433000" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:55.196752    1767 pod_ready.go:93] pod "kube-controller-manager-addons-433000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:15:55.196761    1767 pod_ready.go:82] duration metric: took 3.017213ms for pod "kube-controller-manager-addons-433000" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:55.196767    1767 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-97vzh" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:55.199477    1767 pod_ready.go:93] pod "kube-proxy-97vzh" in "kube-system" namespace has status "Ready":"True"
	I0926 17:15:55.199486    1767 pod_ready.go:82] duration metric: took 2.718561ms for pod "kube-proxy-97vzh" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:55.199492    1767 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-433000" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:55.331120    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:55.446090    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:55.584075    1767 pod_ready.go:93] pod "kube-scheduler-addons-433000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:15:55.584086    1767 pod_ready.go:82] duration metric: took 385.133017ms for pod "kube-scheduler-addons-433000" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:55.584092    1767 pod_ready.go:39] duration metric: took 42.646324459s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0926 17:15:55.584112    1767 api_server.go:52] waiting for apiserver process to appear ...
	I0926 17:15:55.585006    1767 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 17:15:55.603554    1767 api_server.go:72] duration metric: took 43.251088454s to wait for apiserver process to appear ...
	I0926 17:15:55.603568    1767 api_server.go:88] waiting for apiserver healthz status ...
	I0926 17:15:55.603586    1767 api_server.go:253] Checking apiserver healthz at https://192.169.0.2:8443/healthz ...
	I0926 17:15:55.606658    1767 api_server.go:279] https://192.169.0.2:8443/healthz returned 200:
	ok
	I0926 17:15:55.607378    1767 api_server.go:141] control plane version: v1.31.1
	I0926 17:15:55.607388    1767 api_server.go:131] duration metric: took 3.821071ms to wait for apiserver health ...
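
The healthz probe can be replayed directly against the endpoint shown in the log; a minimal sketch (-k skips certificate verification, acceptable only for a throwaway check like this):

    # a healthy apiserver answers HTTP 200 with the literal body "ok"
    curl -sk https://192.169.0.2:8443/healthz
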
	I0926 17:15:55.607393    1767 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 17:15:55.788678    1767 system_pods.go:59] 17 kube-system pods found
	I0926 17:15:55.788701    1767 system_pods.go:61] "coredns-7c65d6cfc9-sndq2" [708020e2-ca65-49fb-baa7-85dc9d6344ed] Running
	I0926 17:15:55.788710    1767 system_pods.go:61] "csi-hostpath-attacher-0" [8185db4a-37b2-49af-a5bd-92b58db5e943] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0926 17:15:55.788715    1767 system_pods.go:61] "csi-hostpath-resizer-0" [9d09e9fa-1509-4a57-98f4-ef7a139003dd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0926 17:15:55.788721    1767 system_pods.go:61] "csi-hostpathplugin-b9p44" [bd23f784-bb86-4dde-902a-711b5a0365cf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0926 17:15:55.788725    1767 system_pods.go:61] "etcd-addons-433000" [b0f28dac-1e20-46b5-9dda-4e92e3d38b7c] Running
	I0926 17:15:55.788728    1767 system_pods.go:61] "kube-apiserver-addons-433000" [6cb93722-e121-4855-8213-8cab78dd75d7] Running
	I0926 17:15:55.788731    1767 system_pods.go:61] "kube-controller-manager-addons-433000" [4dfb9b9a-b5cc-4796-8e16-7e349f0c313b] Running
	I0926 17:15:55.788735    1767 system_pods.go:61] "kube-ingress-dns-minikube" [e63a9723-5936-4e0e-a6af-55d19d83a77f] Running
	I0926 17:15:55.788738    1767 system_pods.go:61] "kube-proxy-97vzh" [09f67920-949c-4e6d-8185-a93e2027a620] Running
	I0926 17:15:55.788741    1767 system_pods.go:61] "kube-scheduler-addons-433000" [951124df-4f10-4986-b385-5285778fb7be] Running
	I0926 17:15:55.788748    1767 system_pods.go:61] "metrics-server-84c5f94fbc-lt2hp" [2012f46c-0434-4a4c-bc66-5ff170a57a47] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0926 17:15:55.788752    1767 system_pods.go:61] "nvidia-device-plugin-daemonset-mzlmc" [8d39b932-83e4-436e-9ba6-cf639dfdccfa] Running
	I0926 17:15:55.788755    1767 system_pods.go:61] "registry-66c9cd494c-gdmdl" [b49ae8a8-4cbc-4a75-8913-e8be3cc60c32] Running
	I0926 17:15:55.788758    1767 system_pods.go:61] "registry-proxy-nkz2s" [516b6f7b-4fac-4c3f-b845-0484389422ee] Running
	I0926 17:15:55.788764    1767 system_pods.go:61] "snapshot-controller-56fcc65765-p94ns" [4a75b6c4-0583-4986-85b3-5e28830acfe2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 17:15:55.788769    1767 system_pods.go:61] "snapshot-controller-56fcc65765-xwmwb" [84d5d29e-3dbf-4c42-a186-fdf3ac38e3ea] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 17:15:55.788772    1767 system_pods.go:61] "storage-provisioner" [294a8670-5b85-4419-a5b9-5327d75dbaf6] Running
	I0926 17:15:55.788776    1767 system_pods.go:74] duration metric: took 181.626885ms to wait for pod list to return data ...
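
The inventory above is essentially a namespace listing; a manual spot-check would be (sketch):

    # the Pending entries are addon pods whose containers are not yet Ready
    kubectl get pods -n kube-system -o wide
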
	I0926 17:15:55.788782    1767 default_sa.go:34] waiting for default service account to be created ...
	I0926 17:15:55.830701    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:55.945309    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:55.983284    1767 default_sa.go:45] found service account: "default"
	I0926 17:15:55.983297    1767 default_sa.go:55] duration metric: took 194.785232ms for default service account to be created ...
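
The default service account is created asynchronously by the service-account controller, which is why the harness has to poll for it; the equivalent one-off check (sketch):

    # exits non-zero until the controller has created the account
    kubectl get serviceaccount default -n default
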
	I0926 17:15:55.983303    1767 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 17:15:56.186695    1767 system_pods.go:86] 17 kube-system pods found
	I0926 17:15:56.186712    1767 system_pods.go:89] "coredns-7c65d6cfc9-sndq2" [708020e2-ca65-49fb-baa7-85dc9d6344ed] Running
	I0926 17:15:56.186733    1767 system_pods.go:89] "csi-hostpath-attacher-0" [8185db4a-37b2-49af-a5bd-92b58db5e943] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0926 17:15:56.186741    1767 system_pods.go:89] "csi-hostpath-resizer-0" [9d09e9fa-1509-4a57-98f4-ef7a139003dd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0926 17:15:56.186747    1767 system_pods.go:89] "csi-hostpathplugin-b9p44" [bd23f784-bb86-4dde-902a-711b5a0365cf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0926 17:15:56.186751    1767 system_pods.go:89] "etcd-addons-433000" [b0f28dac-1e20-46b5-9dda-4e92e3d38b7c] Running
	I0926 17:15:56.186755    1767 system_pods.go:89] "kube-apiserver-addons-433000" [6cb93722-e121-4855-8213-8cab78dd75d7] Running
	I0926 17:15:56.186758    1767 system_pods.go:89] "kube-controller-manager-addons-433000" [4dfb9b9a-b5cc-4796-8e16-7e349f0c313b] Running
	I0926 17:15:56.186762    1767 system_pods.go:89] "kube-ingress-dns-minikube" [e63a9723-5936-4e0e-a6af-55d19d83a77f] Running
	I0926 17:15:56.186765    1767 system_pods.go:89] "kube-proxy-97vzh" [09f67920-949c-4e6d-8185-a93e2027a620] Running
	I0926 17:15:56.186768    1767 system_pods.go:89] "kube-scheduler-addons-433000" [951124df-4f10-4986-b385-5285778fb7be] Running
	I0926 17:15:56.186771    1767 system_pods.go:89] "metrics-server-84c5f94fbc-lt2hp" [2012f46c-0434-4a4c-bc66-5ff170a57a47] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0926 17:15:56.186775    1767 system_pods.go:89] "nvidia-device-plugin-daemonset-mzlmc" [8d39b932-83e4-436e-9ba6-cf639dfdccfa] Running
	I0926 17:15:56.186778    1767 system_pods.go:89] "registry-66c9cd494c-gdmdl" [b49ae8a8-4cbc-4a75-8913-e8be3cc60c32] Running
	I0926 17:15:56.186781    1767 system_pods.go:89] "registry-proxy-nkz2s" [516b6f7b-4fac-4c3f-b845-0484389422ee] Running
	I0926 17:15:56.186785    1767 system_pods.go:89] "snapshot-controller-56fcc65765-p94ns" [4a75b6c4-0583-4986-85b3-5e28830acfe2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 17:15:56.186790    1767 system_pods.go:89] "snapshot-controller-56fcc65765-xwmwb" [84d5d29e-3dbf-4c42-a186-fdf3ac38e3ea] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 17:15:56.186793    1767 system_pods.go:89] "storage-provisioner" [294a8670-5b85-4419-a5b9-5327d75dbaf6] Running
	I0926 17:15:56.186798    1767 system_pods.go:126] duration metric: took 203.76322ms to wait for k8s-apps to be running ...
	I0926 17:15:56.186806    1767 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 17:15:56.186863    1767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 17:15:56.199680    1767 system_svc.go:56] duration metric: took 12.889136ms WaitForService to wait for kubelet
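
The kubelet probe above is a plain systemd query run over SSH; inside the VM (for example via minikube ssh) it would look like this sketch:

    # --quiet suppresses the "active"/"inactive" output; only the exit status is checked
    sudo systemctl is-active --quiet kubelet && echo kubelet is running
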
	I0926 17:15:56.199695    1767 kubeadm.go:582] duration metric: took 43.848056966s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:15:56.199709    1767 node_conditions.go:102] verifying NodePressure condition ...
	I0926 17:15:56.330318    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:56.383277    1767 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:15:56.383292    1767 node_conditions.go:123] node cpu capacity is 2
	I0926 17:15:56.383302    1767 node_conditions.go:105] duration metric: took 183.832609ms to run NodePressure ...
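
NodePressure verification reads the node's reported capacity and condition list from the API; a hand-rolled equivalent (sketch, jsonpath only):

    # MemoryPressure, DiskPressure and PIDPressure should all be False on a healthy node
    kubectl get nodes -o jsonpath='{range .items[*].status.conditions[*]}{.type}={.status}{"\n"}{end}'
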
	I0926 17:15:56.383310    1767 start.go:241] waiting for startup goroutines ...
	I0926 17:15:56.444511    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:56.829668    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:56.944220    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:57.328608    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:57.443358    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:57.827860    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:57.944786    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:58.327439    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:58.441850    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:58.826855    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:58.942772    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:59.326156    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:59.442378    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:59.825406    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:59.942122    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:00.325431    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:00.439438    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:00.824657    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:01.009964    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:01.323913    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:01.439512    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:01.824951    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:01.939203    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:02.323932    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:02.439276    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:02.823927    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:02.937891    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:03.322275    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:03.438423    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:03.821749    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:03.937342    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:04.321538    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:04.436489    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:04.821183    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:04.936980    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:05.321095    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:05.435442    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:05.820136    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:05.935736    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:06.320235    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:06.437113    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:06.819635    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:06.934223    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:07.319088    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:07.434619    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:07.818688    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:07.934229    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:08.318562    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:08.434536    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:08.818344    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:08.933519    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:09.319607    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:09.433763    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:09.817679    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:09.932338    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:10.317578    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:10.432089    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:10.817383    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:10.932853    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:11.317689    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:11.432410    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:11.816895    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:11.932955    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:12.316884    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:12.430834    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:12.816608    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:12.930814    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:13.316279    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:13.430820    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:13.816347    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:13.934946    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:14.316154    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:14.434731    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:14.815703    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:14.931822    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:15.314852    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:15.430310    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:15.815139    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:15.930156    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:16.315002    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:16.433242    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:16.814886    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:16.932024    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:17.314568    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:17.431853    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:17.814665    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:17.928771    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:18.314102    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:18.428961    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:18.814001    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:18.928880    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:19.313748    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:19.430396    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:19.813588    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:19.928496    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:20.314122    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:20.430064    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:20.813104    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:20.930105    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:21.313367    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:21.428463    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:21.812927    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:21.927543    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:22.313517    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:22.429383    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:22.812974    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:22.927954    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:23.312675    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:23.429059    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:23.812637    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:23.927163    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:24.312635    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:24.427012    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:24.812098    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:24.929659    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:25.312068    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:25.427714    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:25.812243    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:25.926479    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:26.312417    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:26.427467    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:26.811976    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:26.927007    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:27.311684    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:27.434050    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:27.812333    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:27.928321    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:28.312162    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:28.428530    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:28.812317    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:28.928033    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:29.311468    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:29.426371    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:29.811738    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:29.927382    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:30.311440    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:30.426799    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:30.811569    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:30.925982    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:31.311462    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:31.425852    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:31.811041    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:31.926405    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:32.311355    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:32.426131    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:32.811439    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:32.926063    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:33.311304    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:33.427301    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:33.810935    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:33.926698    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:34.311329    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:34.426472    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:34.811813    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:34.926944    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:35.310546    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:35.425688    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:35.810967    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:35.926950    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:36.310580    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:36.425850    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:36.810657    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:36.926825    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:37.310414    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:37.425198    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:37.810614    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:37.925709    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:38.310293    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:38.425955    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:38.810880    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:38.926831    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:39.310445    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:39.426468    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:39.810543    1767 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:39.931227    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:40.310073    1767 kapi.go:107] duration metric: took 1m17.502027306s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0926 17:16:40.427610    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:40.925198    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:41.426934    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:41.926557    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:42.427242    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:42.926706    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:43.426496    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:43.926859    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:44.426177    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:44.926174    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:45.424684    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:45.925571    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:46.424670    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:46.924973    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:47.426607    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:47.926505    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:48.425692    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:48.924656    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:49.427826    1767 kapi.go:107] duration metric: took 1m26.006394547s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
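The kapi.go:96 lines above are minikube polling every pod matching a label selector roughly twice a second until it leaves Pending, and kapi.go:107 records the total wait once all pods are up. A minimal client-go sketch of that polling pattern (an illustration only, not minikube's actual kapi implementation; the selector and namespace are taken from the log above):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls pods matching selector until every one is Running,
// printing the same kind of "waiting for pod" line seen in the log above.
func waitForPodsRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				allRunning = false
			}
		}
		if allRunning {
			fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodsRunning(cs, "kube-system",
		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
		panic(err)
	}
}
```

A list-and-sleep loop like this is the simplest form of the pattern; a watch-based wait would react faster, at the cost of reconnect handling.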
	I0926 17:18:08.310075    1767 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0926 17:18:08.310088    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 17:18:08.811059    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 17:18:09.309722    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 17:18:09.808549    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 17:18:10.307868    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 17:18:10.809201    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 17:18:11.306576    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 17:18:11.808319    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 17:18:12.307630    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 17:18:12.807656    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 17:18:13.307361    1767 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 17:18:13.809586    1767 kapi.go:107] duration metric: took 2m49.505169888s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0926 17:18:13.840654    1767 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-433000 cluster.
	I0926 17:18:13.860522    1767 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0926 17:18:13.902406    1767 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0926 17:18:13.976526    1767 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, yakd, storage-provisioner, nvidia-device-plugin, storage-provisioner-rancher, inspektor-gadget, metrics-server, volcano, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0926 17:18:14.034548    1767 addons.go:510] duration metric: took 3m1.70393836s for enable addons: enabled=[cloud-spanner ingress-dns yakd storage-provisioner nvidia-device-plugin storage-provisioner-rancher inspektor-gadget metrics-server volcano default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
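The gcp-auth hint a few lines up says the credential mount can be skipped per pod by adding a label with the `gcp-auth-skip-secret` key. A hedged client-go sketch of creating such a pod; the value "true" follows minikube's documented convention for this label, and the pod name and image here are illustrative only:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // illustrative name
			// The gcp-auth webhook skips mounting the credential secret
			// into pods carrying this label.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```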
	I0926 17:18:14.034600    1767 start.go:246] waiting for cluster config update ...
	I0926 17:18:14.034638    1767 start.go:255] writing updated cluster config ...
	I0926 17:18:14.036903    1767 ssh_runner.go:195] Run: rm -f paused
	I0926 17:18:14.084310    1767 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0926 17:18:14.105569    1767 out.go:201] 
	W0926 17:18:14.126544    1767 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0926 17:18:14.147583    1767 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0926 17:18:14.225645    1767 out.go:177] * Done! kubectl is now configured to use "addons-433000" cluster and "default" namespace by default
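The start.go:600 warning above flags a client/server minor-version skew of 2 (kubectl 1.29 against a 1.31 cluster), outside kubectl's supported window of one minor version in either direction. A small sketch of that skew computation (illustrative only; this is not minikube's actual start.go code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two "major.minor.patch" version strings, e.g. "1.29.2" vs "1.31.1" -> 2.
func minorSkew(client, server string) int {
	minor := func(v string) int {
		n, _ := strconv.Atoi(strings.Split(v, ".")[1])
		return n
	}
	d := minor(client) - minor(server)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	if skew := minorSkew("1.29.2", "1.31.1"); skew > 1 {
		fmt.Printf("! kubectl is %d minor versions from the cluster; expect possible incompatibilities\n", skew)
	}
}
```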
	
	
	==> Docker <==
	Sep 27 00:27:59 addons-433000 dockerd[1233]: time="2024-09-27T00:27:59.995589371Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:28:00 addons-433000 dockerd[1227]: time="2024-09-27T00:28:00.083544800Z" level=info msg="ignoring event" container=9da606737659d07fd444a0f46571eaffff2c8464de264f0cc2a760777a5e666e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:00 addons-433000 dockerd[1233]: time="2024-09-27T00:28:00.084301483Z" level=info msg="shim disconnected" id=9da606737659d07fd444a0f46571eaffff2c8464de264f0cc2a760777a5e666e namespace=moby
	Sep 27 00:28:00 addons-433000 dockerd[1233]: time="2024-09-27T00:28:00.084425531Z" level=warning msg="cleaning up after shim disconnected" id=9da606737659d07fd444a0f46571eaffff2c8464de264f0cc2a760777a5e666e namespace=moby
	Sep 27 00:28:00 addons-433000 dockerd[1233]: time="2024-09-27T00:28:00.084472207Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:28:14 addons-433000 dockerd[1227]: time="2024-09-27T00:28:14.176503457Z" level=info msg="ignoring event" container=b2b7ac61693eaafe268d4136169bfe498133233c273733c2b50bf5fd2dcc2dda module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:14 addons-433000 dockerd[1233]: time="2024-09-27T00:28:14.177155724Z" level=info msg="shim disconnected" id=b2b7ac61693eaafe268d4136169bfe498133233c273733c2b50bf5fd2dcc2dda namespace=moby
	Sep 27 00:28:14 addons-433000 dockerd[1233]: time="2024-09-27T00:28:14.177372446Z" level=warning msg="cleaning up after shim disconnected" id=b2b7ac61693eaafe268d4136169bfe498133233c273733c2b50bf5fd2dcc2dda namespace=moby
	Sep 27 00:28:14 addons-433000 dockerd[1233]: time="2024-09-27T00:28:14.177431777Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:28:14 addons-433000 dockerd[1227]: time="2024-09-27T00:28:14.618767171Z" level=info msg="ignoring event" container=63b065ca61806330ae63695f85a698d367c40069acb6178ccca7450c8ec1a9f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:14 addons-433000 dockerd[1233]: time="2024-09-27T00:28:14.619494234Z" level=info msg="shim disconnected" id=63b065ca61806330ae63695f85a698d367c40069acb6178ccca7450c8ec1a9f3 namespace=moby
	Sep 27 00:28:14 addons-433000 dockerd[1233]: time="2024-09-27T00:28:14.619567493Z" level=warning msg="cleaning up after shim disconnected" id=63b065ca61806330ae63695f85a698d367c40069acb6178ccca7450c8ec1a9f3 namespace=moby
	Sep 27 00:28:14 addons-433000 dockerd[1233]: time="2024-09-27T00:28:14.619576957Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:28:14 addons-433000 dockerd[1233]: time="2024-09-27T00:28:14.635284531Z" level=info msg="shim disconnected" id=cf81f3542fdd4d37613634a53ff0d4c92f57f22196c7237c21a017cedd149c59 namespace=moby
	Sep 27 00:28:14 addons-433000 dockerd[1233]: time="2024-09-27T00:28:14.635509239Z" level=warning msg="cleaning up after shim disconnected" id=cf81f3542fdd4d37613634a53ff0d4c92f57f22196c7237c21a017cedd149c59 namespace=moby
	Sep 27 00:28:14 addons-433000 dockerd[1233]: time="2024-09-27T00:28:14.635554565Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:28:14 addons-433000 dockerd[1227]: time="2024-09-27T00:28:14.635880219Z" level=info msg="ignoring event" container=cf81f3542fdd4d37613634a53ff0d4c92f57f22196c7237c21a017cedd149c59 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:14 addons-433000 dockerd[1227]: time="2024-09-27T00:28:14.816871489Z" level=info msg="ignoring event" container=026d09d5d4a72fa425e600f0f9c5407282dde91b3468e15711176b05d4aa3b72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:14 addons-433000 dockerd[1233]: time="2024-09-27T00:28:14.817478369Z" level=info msg="shim disconnected" id=026d09d5d4a72fa425e600f0f9c5407282dde91b3468e15711176b05d4aa3b72 namespace=moby
	Sep 27 00:28:14 addons-433000 dockerd[1233]: time="2024-09-27T00:28:14.817695927Z" level=warning msg="cleaning up after shim disconnected" id=026d09d5d4a72fa425e600f0f9c5407282dde91b3468e15711176b05d4aa3b72 namespace=moby
	Sep 27 00:28:14 addons-433000 dockerd[1233]: time="2024-09-27T00:28:14.817728322Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:28:14 addons-433000 dockerd[1233]: time="2024-09-27T00:28:14.874171053Z" level=info msg="shim disconnected" id=fdda1f21b4516710397e1a19c263b22a5708e0f6a6c13640ef13727094aafabf namespace=moby
	Sep 27 00:28:14 addons-433000 dockerd[1233]: time="2024-09-27T00:28:14.874394100Z" level=warning msg="cleaning up after shim disconnected" id=fdda1f21b4516710397e1a19c263b22a5708e0f6a6c13640ef13727094aafabf namespace=moby
	Sep 27 00:28:14 addons-433000 dockerd[1233]: time="2024-09-27T00:28:14.874444826Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:28:14 addons-433000 dockerd[1227]: time="2024-09-27T00:28:14.874791735Z" level=info msg="ignoring event" container=fdda1f21b4516710397e1a19c263b22a5708e0f6a6c13640ef13727094aafabf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	d964cdb23bff3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            33 seconds ago      Exited              gadget                                   7                   a239e1dcbeea3       gadget-9kgzh
	dc4cd44be9448       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 10 minutes ago      Running             gcp-auth                                 0                   640e66dcddbe3       gcp-auth-89d5ffd79-69lhr
	1d65333dc7fc1       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          11 minutes ago      Running             csi-snapshotter                          0                   dfbaa0364f07d       csi-hostpathplugin-b9p44
	b7af34ef19544       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          11 minutes ago      Running             csi-provisioner                          0                   dfbaa0364f07d       csi-hostpathplugin-b9p44
	dc4679aeecf47       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            11 minutes ago      Running             liveness-probe                           0                   dfbaa0364f07d       csi-hostpathplugin-b9p44
	c45735f23b817       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           11 minutes ago      Running             hostpath                                 0                   dfbaa0364f07d       csi-hostpathplugin-b9p44
	2580b7fd176cc       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce                             11 minutes ago      Running             controller                               0                   0650159fd8753       ingress-nginx-controller-bc57996ff-v5s6z
	6ee4f2c7a5f1f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                11 minutes ago      Running             node-driver-registrar                    0                   dfbaa0364f07d       csi-hostpathplugin-b9p44
	586c2197b8492       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              11 minutes ago      Running             csi-resizer                              0                   06a5e7c6aa802       csi-hostpath-resizer-0
	cf3bd0b98c15e       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             11 minutes ago      Running             csi-attacher                             0                   b529d680138e7       csi-hostpath-attacher-0
	3d55c4710c6a5       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   11 minutes ago      Running             csi-external-health-monitor-controller   0                   dfbaa0364f07d       csi-hostpathplugin-b9p44
	a4e5c2d48039c       ce263a8653f9c                                                                                                                                12 minutes ago      Exited              patch                                    1                   8e38a7d6cbd6a       ingress-nginx-admission-patch-99n4k
	7108abcd1f69d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   12 minutes ago      Exited              create                                   0                   558be6450d11f       ingress-nginx-admission-create-zqknl
	0b410d6cbd94e       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        12 minutes ago      Running             metrics-server                           0                   fc3c814dc3295       metrics-server-84c5f94fbc-lt2hp
	9e064a21de41d       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago      Running             volume-snapshot-controller               0                   d319e057cd13c       snapshot-controller-56fcc65765-p94ns
	30f2a4f241348       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago      Running             volume-snapshot-controller               0                   96250a3e55f95       snapshot-controller-56fcc65765-xwmwb
	7e93719e73c2f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             12 minutes ago      Running             minikube-ingress-dns                     0                   caf1eab51a695       kube-ingress-dns-minikube
	503a7c2cf2dd8       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e                               12 minutes ago      Running             cloud-spanner-emulator                   0                   4e058d04bdc1b       cloud-spanner-emulator-5b584cc74-p8d5s
	698a8a1c9783a       6e38f40d628db                                                                                                                                12 minutes ago      Running             storage-provisioner                      0                   a1797b7d1e887       storage-provisioner
	78bd034bcc397       c69fa2e9cbf5f                                                                                                                                13 minutes ago      Running             coredns                                  0                   76920fe456e45       coredns-7c65d6cfc9-sndq2
	af08ebacd51a4       60c005f310ff3                                                                                                                                13 minutes ago      Running             kube-proxy                               0                   5539b2892974b       kube-proxy-97vzh
	c7c4cd6fafde7       9aa1fad941575                                                                                                                                13 minutes ago      Running             kube-scheduler                           0                   d776abd96b250       kube-scheduler-addons-433000
	f9988163f4efa       6bab7719df100                                                                                                                                13 minutes ago      Running             kube-apiserver                           0                   d4681ddde8743       kube-apiserver-addons-433000
	34cc0e9064c8e       2e96e5913fc06                                                                                                                                13 minutes ago      Running             etcd                                     0                   989c7ff2aaff4       etcd-addons-433000
	ea086a0c0ec8b       175ffd71cce3d                                                                                                                                13 minutes ago      Running             kube-controller-manager                  0                   cf61eb51df439       kube-controller-manager-addons-433000
	
	
	==> controller_ingress [2580b7fd176c] <==
	W0927 00:16:40.045578       8 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0927 00:16:40.045773       8 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0927 00:16:40.050628       8 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/amd64"
	I0927 00:16:40.260200       8 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0927 00:16:40.288502       8 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0927 00:16:40.296763       8 nginx.go:271] "Starting NGINX Ingress controller"
	I0927 00:16:40.307706       8 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"3f3be844-8c54-48d2-98b2-fc79a8698f86", APIVersion:"v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0927 00:16:40.311852       8 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"99438b63-1940-417c-9d94-71c0f6010b19", APIVersion:"v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0927 00:16:40.312183       8 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"b9f8d571-58b6-4121-a9ef-fabe867e9107", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0927 00:16:41.499850       8 nginx.go:317] "Starting NGINX process"
	I0927 00:16:41.500128       8 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0927 00:16:41.500228       8 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0927 00:16:41.500627       8 controller.go:193] "Configuration changes detected, backend reload required"
	I0927 00:16:41.517355       8 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0927 00:16:41.518423       8 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-v5s6z"
	I0927 00:16:41.565469       8 controller.go:213] "Backend successfully reloaded"
	I0927 00:16:41.565514       8 controller.go:224] "Initial sync, sleeping for 1 second"
	I0927 00:16:41.565719       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-v5s6z", UID:"d0385f9e-7c39-45f0-85f6-6aeb3ba18ce0", APIVersion:"v1", ResourceVersion:"1221", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0927 00:16:41.615096       8 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-v5s6z" node="addons-433000"
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
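The leaderelection.go lines in the controller log above show the standard client-go lease-based leader election ingress-nginx uses to pick one active controller replica. A self-contained sketch of the same mechanism with k8s.io/client-go/tools/leaderelection; the lease name and namespace are copied from the log, while the identity and timing values are typical defaults, not values read from this cluster:

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname() // candidate identity, like the pod name in the log

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "ingress-nginx-leader", Namespace: "ingress-nginx"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease; this replica now reconciles config")
			},
			OnStoppedLeading: func() {
				log.Printf("lost lease %q", id)
			},
			OnNewLeader: func(leader string) {
				log.Printf("new leader elected: %s", leader) // mirrors status.go:85 above
			},
		},
	})
}
```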
	
	
	
	==> coredns [78bd034bcc39] <==
	[INFO] Reloading complete
	[INFO] 127.0.0.1:34484 - 18760 "HINFO IN 3768261224243576907.1173951561672805530. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011104012s
	[INFO] 10.244.0.8:35274 - 49452 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.00021795s
	[INFO] 10.244.0.8:35274 - 33191 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00012293s
	[INFO] 10.244.0.8:35274 - 43808 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000142623s
	[INFO] 10.244.0.8:35274 - 62233 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000111485s
	[INFO] 10.244.0.8:35274 - 2966 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000081699s
	[INFO] 10.244.0.8:35274 - 62114 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000187706s
	[INFO] 10.244.0.8:35274 - 55856 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00010761s
	[INFO] 10.244.0.8:58984 - 50173 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000104181s
	[INFO] 10.244.0.8:58984 - 50401 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000177518s
	[INFO] 10.244.0.8:39492 - 47973 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081686s
	[INFO] 10.244.0.8:39492 - 48173 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000097178s
	[INFO] 10.244.0.8:36525 - 28716 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000033031s
	[INFO] 10.244.0.8:36525 - 29136 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00017995s
	[INFO] 10.244.0.8:60562 - 58563 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096249s
	[INFO] 10.244.0.8:60562 - 58725 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000055613s
	[INFO] 10.244.0.25:56166 - 62158 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00049551s
	[INFO] 10.244.0.25:42934 - 12956 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000497531s
	[INFO] 10.244.0.25:33339 - 62605 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000056361s
	[INFO] 10.244.0.25:34065 - 40157 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000081638s
	[INFO] 10.244.0.25:57995 - 1711 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091452s
	[INFO] 10.244.0.25:53722 - 1139 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000083914s
	[INFO] 10.244.0.25:45115 - 44127 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000541839s
	[INFO] 10.244.0.25:44945 - 50043 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001937571s
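The NXDOMAIN runs above are the cluster DNS search path at work: with the default ndots:5, even `registry.kube-system.svc.cluster.local` (four dots) is first tried with each resolv.conf search suffix appended, and only the final absolute query answers NOERROR. A sketch that replays that expansion explicitly using github.com/miekg/dns; the server address 10.96.0.10:53 is the conventional in-cluster DNS service IP and an assumption here:

```go
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	name := "registry.kube-system.svc.cluster.local"
	// Suffixes a pod's resolv.conf search list would append, ending with the
	// absolute (root) form that finally resolves.
	suffixes := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local", ""}

	c := new(dns.Client)
	for _, s := range suffixes {
		q := name
		if s != "" {
			q = name + "." + s
		}
		m := new(dns.Msg)
		m.SetQuestion(dns.Fqdn(q), dns.TypeA)
		r, _, err := c.Exchange(m, "10.96.0.10:53") // assumed cluster DNS address
		if err != nil {
			fmt.Println("exchange error:", err)
			continue
		}
		fmt.Printf("%s -> %s\n", dns.Fqdn(q), dns.RcodeToString[r.Rcode])
	}
}
```

Run against a cluster resolver, the first three queries print NXDOMAIN and the last NOERROR, matching the coredns log lines above.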
	
	
	==> describe nodes <==
	Name:               addons-433000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-433000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=addons-433000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_26T17_15_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-433000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-433000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:15:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-433000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:28:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:27:43 +0000   Fri, 27 Sep 2024 00:15:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:27:43 +0000   Fri, 27 Sep 2024 00:15:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:27:43 +0000   Fri, 27 Sep 2024 00:15:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:27:43 +0000   Fri, 27 Sep 2024 00:15:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.2
	  Hostname:    addons-433000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912944Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912944Ki
	  pods:               110
	System Info:
	  Machine ID:                 2f42d38636564c3a92cb7c99549da75f
	  System UUID:                81e74e32-0000-0000-882d-0e8e70da50ed
	  Boot ID:                    bee7d0bb-ad16-42d4-88d6-442ea1194bf7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  default                     cloud-spanner-emulator-5b584cc74-p8d5s      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  gadget                      gadget-9kgzh                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-69lhr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-v5s6z    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-sndq2                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-b9p44                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-433000                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-433000                250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-433000       200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-97vzh                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-433000                100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-lt2hp             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-p94ns        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-xwmwb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node addons-433000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node addons-433000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node addons-433000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m   kubelet          Node addons-433000 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node addons-433000 event: Registered Node addons-433000 in Controller
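The describe output above is assembled from the Node object's conditions, capacity, and allocatable fields. A short client-go sketch that reads the same data for this node (node name taken from the report; kubeconfig path is the usual default):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-433000", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Conditions back the MemoryPressure/DiskPressure/PIDPressure/Ready rows.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
	// Allocatable backs the cpu/memory/pods figures in the table above.
	fmt.Printf("allocatable: cpu=%s memory=%s pods=%s\n",
		node.Status.Allocatable.Cpu(),
		node.Status.Allocatable.Memory(),
		node.Status.Allocatable.Pods())
}
```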
	
	
	==> dmesg <==
	[  +5.021857] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.036067] kauditd_printk_skb: 143 callbacks suppressed
	[ +10.341178] kauditd_printk_skb: 71 callbacks suppressed
	[ +18.958787] kauditd_printk_skb: 4 callbacks suppressed
	[Sep27 00:16] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.808871] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.790363] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.556532] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.416937] kauditd_printk_skb: 32 callbacks suppressed
	[ +12.328964] kauditd_printk_skb: 38 callbacks suppressed
	[  +7.015160] kauditd_printk_skb: 56 callbacks suppressed
	[Sep27 00:17] kauditd_printk_skb: 28 callbacks suppressed
	[ +14.294966] kauditd_printk_skb: 39 callbacks suppressed
	[Sep27 00:18] kauditd_printk_skb: 3 callbacks suppressed
	[  +6.269304] kauditd_printk_skb: 40 callbacks suppressed
	[ +27.931883] kauditd_printk_skb: 2 callbacks suppressed
	[ +18.807048] kauditd_printk_skb: 20 callbacks suppressed
	[Sep27 00:19] kauditd_printk_skb: 2 callbacks suppressed
	[Sep27 00:22] kauditd_printk_skb: 28 callbacks suppressed
	[Sep27 00:27] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.267680] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.446422] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.658252] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.196276] kauditd_printk_skb: 33 callbacks suppressed
	[Sep27 00:28] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [34cc0e9064c8] <==
	{"level":"info","ts":"2024-09-27T00:15:22.985900Z","caller":"traceutil/trace.go:171","msg":"trace[1145026768] transaction","detail":"{read_only:false; response_revision:848; number_of_response:1; }","duration":"127.406951ms","start":"2024-09-27T00:15:22.858468Z","end":"2024-09-27T00:15:22.985875Z","steps":["trace[1145026768] 'process raft request'  (duration: 126.406247ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:15:23.577040Z","caller":"traceutil/trace.go:171","msg":"trace[350584525] transaction","detail":"{read_only:false; response_revision:891; number_of_response:1; }","duration":"138.280418ms","start":"2024-09-27T00:15:23.438748Z","end":"2024-09-27T00:15:23.577029Z","steps":["trace[350584525] 'process raft request'  (duration: 138.211783ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:15:23.613997Z","caller":"traceutil/trace.go:171","msg":"trace[1180283735] linearizableReadLoop","detail":"{readStateIndex:908; appliedIndex:907; }","duration":"170.579021ms","start":"2024-09-27T00:15:23.443408Z","end":"2024-09-27T00:15:23.613987Z","steps":["trace[1180283735] 'read index received'  (duration: 133.907246ms)","trace[1180283735] 'applied index is now lower than readState.Index'  (duration: 36.671419ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:15:23.614060Z","caller":"traceutil/trace.go:171","msg":"trace[1982625185] transaction","detail":"{read_only:false; response_revision:892; number_of_response:1; }","duration":"170.892989ms","start":"2024-09-27T00:15:23.443162Z","end":"2024-09-27T00:15:23.614055Z","steps":["trace[1982625185] 'process raft request'  (duration: 170.728628ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:15:23.614217Z","caller":"traceutil/trace.go:171","msg":"trace[515068569] transaction","detail":"{read_only:false; response_revision:893; number_of_response:1; }","duration":"167.632389ms","start":"2024-09-27T00:15:23.446580Z","end":"2024-09-27T00:15:23.614213Z","steps":["trace[515068569] 'process raft request'  (duration: 167.38001ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:15:23.614362Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.946291ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-27T00:15:23.614399Z","caller":"traceutil/trace.go:171","msg":"trace[1188458304] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:893; }","duration":"170.988114ms","start":"2024-09-27T00:15:23.443406Z","end":"2024-09-27T00:15:23.614394Z","steps":["trace[1188458304] 'agreement among raft nodes before linearized reading'  (duration: 170.911831ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:15:23.616352Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.711401ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-attacher\" ","response":"range_response_count:1 size:535"}
	{"level":"info","ts":"2024-09-27T00:15:23.616390Z","caller":"traceutil/trace.go:171","msg":"trace[2134556910] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-attacher; range_end:; response_count:1; response_revision:894; }","duration":"161.751987ms","start":"2024-09-27T00:15:23.454633Z","end":"2024-09-27T00:15:23.616385Z","steps":["trace[2134556910] 'agreement among raft nodes before linearized reading'  (duration: 161.664843ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:15:23.616635Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.649958ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:15:23.616668Z","caller":"traceutil/trace.go:171","msg":"trace[1046129571] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:894; }","duration":"130.684522ms","start":"2024-09-27T00:15:23.485980Z","end":"2024-09-27T00:15:23.616664Z","steps":["trace[1046129571] 'agreement among raft nodes before linearized reading'  (duration: 130.644703ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:15:23.616720Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.570096ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:15:23.616750Z","caller":"traceutil/trace.go:171","msg":"trace[1081780034] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:894; }","duration":"150.601115ms","start":"2024-09-27T00:15:23.466146Z","end":"2024-09-27T00:15:23.616747Z","steps":["trace[1081780034] 'agreement among raft nodes before linearized reading'  (duration: 150.564884ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:15:31.447727Z","caller":"traceutil/trace.go:171","msg":"trace[816210793] transaction","detail":"{read_only:false; response_revision:965; number_of_response:1; }","duration":"119.985258ms","start":"2024-09-27T00:15:31.327692Z","end":"2024-09-27T00:15:31.447678Z","steps":["trace[816210793] 'process raft request'  (duration: 119.869728ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:15:35.848220Z","caller":"traceutil/trace.go:171","msg":"trace[558511216] transaction","detail":"{read_only:false; response_revision:975; number_of_response:1; }","duration":"133.684451ms","start":"2024-09-27T00:15:35.714526Z","end":"2024-09-27T00:15:35.848211Z","steps":["trace[558511216] 'process raft request'  (duration: 133.441458ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:15:38.202752Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.348655ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:15:38.202802Z","caller":"traceutil/trace.go:171","msg":"trace[885607957] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:982; }","duration":"114.409001ms","start":"2024-09-27T00:15:38.088386Z","end":"2024-09-27T00:15:38.202795Z","steps":["trace[885607957] 'range keys from in-memory index tree'  (duration: 114.312652ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:18:14.233775Z","caller":"traceutil/trace.go:171","msg":"trace[185964633] transaction","detail":"{read_only:false; response_revision:1500; number_of_response:1; }","duration":"113.302158ms","start":"2024-09-27T00:18:14.120458Z","end":"2024-09-27T00:18:14.233760Z","steps":["trace[185964633] 'process raft request'  (duration: 39.745513ms)","trace[185964633] 'compare'  (duration: 73.490499ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:18:40.553992Z","caller":"traceutil/trace.go:171","msg":"trace[663935450] transaction","detail":"{read_only:false; response_revision:1569; number_of_response:1; }","duration":"134.932928ms","start":"2024-09-27T00:18:40.419049Z","end":"2024-09-27T00:18:40.553982Z","steps":["trace[663935450] 'process raft request'  (duration: 134.867127ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:18:40.554301Z","caller":"traceutil/trace.go:171","msg":"trace[1573377947] linearizableReadLoop","detail":"{readStateIndex:1629; appliedIndex:1629; }","duration":"123.31005ms","start":"2024-09-27T00:18:40.430985Z","end":"2024-09-27T00:18:40.554295Z","steps":["trace[1573377947] 'read index received'  (duration: 123.308117ms)","trace[1573377947] 'applied index is now lower than readState.Index'  (duration: 1.677µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-27T00:18:40.554361Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.364013ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-09-27T00:18:40.554373Z","caller":"traceutil/trace.go:171","msg":"trace[272321060] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1569; }","duration":"123.387461ms","start":"2024-09-27T00:18:40.430982Z","end":"2024-09-27T00:18:40.554370Z","steps":["trace[272321060] 'agreement among raft nodes before linearized reading'  (duration: 123.331574ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:25:03.437250Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1852}
	{"level":"info","ts":"2024-09-27T00:25:03.492959Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1852,"took":"55.248277ms","hash":2309267453,"current-db-size-bytes":9039872,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":4984832,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-27T00:25:03.493044Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2309267453,"revision":1852,"compact-revision":-1}
	
	
	==> gcp-auth [dc4cd44be944] <==
	2024/09/27 00:18:13 GCP Auth Webhook started!
	2024/09/27 00:18:30 Ready to marshal response ...
	2024/09/27 00:18:30 Ready to write response ...
	2024/09/27 00:18:31 Ready to marshal response ...
	2024/09/27 00:18:31 Ready to write response ...
	2024/09/27 00:18:59 Ready to marshal response ...
	2024/09/27 00:18:59 Ready to write response ...
	2024/09/27 00:18:59 Ready to marshal response ...
	2024/09/27 00:18:59 Ready to write response ...
	2024/09/27 00:18:59 Ready to marshal response ...
	2024/09/27 00:18:59 Ready to write response ...
	2024/09/27 00:27:14 Ready to marshal response ...
	2024/09/27 00:27:14 Ready to write response ...
	2024/09/27 00:27:20 Ready to marshal response ...
	2024/09/27 00:27:20 Ready to write response ...
	2024/09/27 00:27:20 Ready to marshal response ...
	2024/09/27 00:27:20 Ready to write response ...
	2024/09/27 00:27:29 Ready to marshal response ...
	2024/09/27 00:27:29 Ready to write response ...
	
	
	==> kernel <==
	 00:28:16 up 13 min,  0 users,  load average: 0.33, 0.39, 0.36
	Linux addons-433000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f9988163f4ef] <==
	E0927 00:17:27.398518       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.90.169:443: connect: connection refused" logger="UnhandledError"
	W0927 00:18:08.254098       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.90.169:443: connect: connection refused
	E0927 00:18:08.254121       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.90.169:443: connect: connection refused" logger="UnhandledError"
	I0927 00:18:30.827740       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0927 00:18:30.845312       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0927 00:18:49.317424       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0927 00:18:49.434331       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0927 00:18:49.613895       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0927 00:18:49.660893       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0927 00:18:49.701101       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0927 00:18:49.786915       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0927 00:18:50.100093       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0927 00:18:50.156692       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0927 00:18:50.294915       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0927 00:18:50.650754       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0927 00:18:50.774241       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0927 00:18:50.787164       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0927 00:18:51.074247       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0927 00:18:51.131192       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0927 00:18:51.295668       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0927 00:18:51.388393       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	E0927 00:27:30.565294       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0927 00:27:30.571133       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0927 00:27:30.576546       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0927 00:27:45.578436       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
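Note: the "invalid bearer token" errors above line up with the controller-manager deleting the local-path-provisioner ReplicaSet at 00:27:29 (see below): requests still carrying a token for the removed service account can no longer be authenticated. A quick confirmation sketch (namespace taken from the controller-manager log; the NotFound outcome is the expected result, not captured output):

    kubectl --context addons-433000 -n local-path-storage \
      get serviceaccount local-path-provisioner-service-account
    # Expected: Error from server (NotFound) once the addon is torn down.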
	
	
	==> kube-controller-manager [ea086a0c0ec8] <==
	W0927 00:27:07.701176       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:27:07.701256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:27:09.573144       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="3.151µs"
	I0927 00:27:19.664893       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0927 00:27:20.861780       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:27:20.861904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:27:29.938827       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="2.286µs"
	W0927 00:27:32.896180       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:27:32.896259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:27:34.124314       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:27:34.124492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:27:37.633469       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:27:37.633641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:27:39.365016       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:27:39.365244       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:27:43.761647       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-433000"
	W0927 00:27:52.151165       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:27:52.151248       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:02.925769       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:02.925846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:04.518982       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:04.519059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:07.860658       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:07.860810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:28:14.559115       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="2.32µs"
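Note: the repeated *v1.PartialObjectMetadata watch failures are consistent with the metadata informer still holding watches for the volcano.sh resources the apiserver deregistered at 00:18:50 (see the "Terminating all watchers" lines in the kube-apiserver log above). An illustrative check for leftover CRDs:

    # Lists any volcano CRDs still registered; empty output matches the
    # "server could not find the requested resource" errors above.
    kubectl --context addons-433000 get crd | grep volcano.sh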
	
	
	==> kube-proxy [af08ebacd51a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:15:15.571473       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 00:15:15.577676       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.2"]
	E0927 00:15:15.577711       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:15:15.714330       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:15:15.714360       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:15:15.714402       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:15:15.717038       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:15:15.718426       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:15:15.718436       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:15:15.719470       1 config.go:199] "Starting service config controller"
	I0927 00:15:15.719481       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:15:15.719507       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:15:15.719512       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:15:15.719758       1 config.go:328] "Starting node config controller"
	I0927 00:15:15.719763       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:15:15.820316       1 shared_informer.go:320] Caches are synced for node config
	I0927 00:15:15.820339       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:15:15.820366       1 shared_informer.go:320] Caches are synced for endpoint slice config
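Note: the nftables errors at startup are cleanup attempts against a kernel without nftables support ("Operation not supported"); kube-proxy falls back cleanly, as the "Using iptables Proxier" line shows, so they are cosmetic here. Illustrative probes from the host (nft and /proc/config.gz may simply be absent in this Buildroot guest, which would itself confirm the picture):

    out/minikube-darwin-amd64 -p addons-433000 ssh -- "sudo nft list tables"
    out/minikube-darwin-amd64 -p addons-433000 ssh -- "zgrep NF_TABLES /proc/config.gz"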
	
	
	==> kube-scheduler [c7c4cd6fafde] <==
	W0927 00:15:04.932387       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0927 00:15:04.932440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:04.932585       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 00:15:04.933023       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:04.932593       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 00:15:04.933140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:04.932643       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 00:15:04.933185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:04.932680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 00:15:04.933261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:04.932715       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 00:15:04.933305       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:04.932741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 00:15:04.933432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:04.932772       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 00:15:04.933446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:04.932804       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 00:15:04.933565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:05.869332       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 00:15:05.869374       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 00:15:05.904892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 00:15:05.905094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:05.976138       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 00:15:05.976300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0927 00:15:08.520157       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
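Note: the burst of "forbidden" list/watch errors is the usual scheduler startup race: it comes up before the RBAC bootstrap roles are reconciled, and the final "Caches are synced" line shows it recovered on its own. A sketch to confirm the bootstrap binding exists afterwards (the binding name is the standard kubeadm one, assumed here):

    kubectl --context addons-433000 get clusterrolebinding system:kube-scheduler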
	
	
	==> kubelet <==
	Sep 27 00:28:14 addons-433000 kubelet[2005]: I0927 00:28:14.012930    2005 scope.go:117] "RemoveContainer" containerID="d964cdb23bff30ed0e0316d903420c9dfd32b837555aa12a445e9ed1ed14cbe5"
	Sep 27 00:28:14 addons-433000 kubelet[2005]: E0927 00:28:14.013172    2005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-9kgzh_gadget(eb998461-5502-49d4-920e-a1444d6865f5)\"" pod="gadget/gadget-9kgzh" podUID="eb998461-5502-49d4-920e-a1444d6865f5"
	Sep 27 00:28:14 addons-433000 kubelet[2005]: I0927 00:28:14.385056    2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3145ab45-f423-495d-b730-f85d124926fe-gcp-creds\") pod \"3145ab45-f423-495d-b730-f85d124926fe\" (UID: \"3145ab45-f423-495d-b730-f85d124926fe\") "
	Sep 27 00:28:14 addons-433000 kubelet[2005]: I0927 00:28:14.385104    2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5nbp\" (UniqueName: \"kubernetes.io/projected/3145ab45-f423-495d-b730-f85d124926fe-kube-api-access-t5nbp\") pod \"3145ab45-f423-495d-b730-f85d124926fe\" (UID: \"3145ab45-f423-495d-b730-f85d124926fe\") "
	Sep 27 00:28:14 addons-433000 kubelet[2005]: I0927 00:28:14.385260    2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3145ab45-f423-495d-b730-f85d124926fe-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "3145ab45-f423-495d-b730-f85d124926fe" (UID: "3145ab45-f423-495d-b730-f85d124926fe"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 27 00:28:14 addons-433000 kubelet[2005]: I0927 00:28:14.391076    2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3145ab45-f423-495d-b730-f85d124926fe-kube-api-access-t5nbp" (OuterVolumeSpecName: "kube-api-access-t5nbp") pod "3145ab45-f423-495d-b730-f85d124926fe" (UID: "3145ab45-f423-495d-b730-f85d124926fe"). InnerVolumeSpecName "kube-api-access-t5nbp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:28:14 addons-433000 kubelet[2005]: I0927 00:28:14.485846    2005 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3145ab45-f423-495d-b730-f85d124926fe-gcp-creds\") on node \"addons-433000\" DevicePath \"\""
	Sep 27 00:28:14 addons-433000 kubelet[2005]: I0927 00:28:14.485887    2005 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-t5nbp\" (UniqueName: \"kubernetes.io/projected/3145ab45-f423-495d-b730-f85d124926fe-kube-api-access-t5nbp\") on node \"addons-433000\" DevicePath \"\""
	Sep 27 00:28:14 addons-433000 kubelet[2005]: I0927 00:28:14.989614    2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c67vh\" (UniqueName: \"kubernetes.io/projected/b49ae8a8-4cbc-4a75-8913-e8be3cc60c32-kube-api-access-c67vh\") pod \"b49ae8a8-4cbc-4a75-8913-e8be3cc60c32\" (UID: \"b49ae8a8-4cbc-4a75-8913-e8be3cc60c32\") "
	Sep 27 00:28:14 addons-433000 kubelet[2005]: I0927 00:28:14.989647    2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5kvg\" (UniqueName: \"kubernetes.io/projected/516b6f7b-4fac-4c3f-b845-0484389422ee-kube-api-access-r5kvg\") pod \"516b6f7b-4fac-4c3f-b845-0484389422ee\" (UID: \"516b6f7b-4fac-4c3f-b845-0484389422ee\") "
	Sep 27 00:28:14 addons-433000 kubelet[2005]: I0927 00:28:14.991442    2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b49ae8a8-4cbc-4a75-8913-e8be3cc60c32-kube-api-access-c67vh" (OuterVolumeSpecName: "kube-api-access-c67vh") pod "b49ae8a8-4cbc-4a75-8913-e8be3cc60c32" (UID: "b49ae8a8-4cbc-4a75-8913-e8be3cc60c32"). InnerVolumeSpecName "kube-api-access-c67vh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:28:14 addons-433000 kubelet[2005]: I0927 00:28:14.991693    2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/516b6f7b-4fac-4c3f-b845-0484389422ee-kube-api-access-r5kvg" (OuterVolumeSpecName: "kube-api-access-r5kvg") pod "516b6f7b-4fac-4c3f-b845-0484389422ee" (UID: "516b6f7b-4fac-4c3f-b845-0484389422ee"). InnerVolumeSpecName "kube-api-access-r5kvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:28:15 addons-433000 kubelet[2005]: I0927 00:28:15.023329    2005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3145ab45-f423-495d-b730-f85d124926fe" path="/var/lib/kubelet/pods/3145ab45-f423-495d-b730-f85d124926fe/volumes"
	Sep 27 00:28:15 addons-433000 kubelet[2005]: I0927 00:28:15.090402    2005 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-c67vh\" (UniqueName: \"kubernetes.io/projected/b49ae8a8-4cbc-4a75-8913-e8be3cc60c32-kube-api-access-c67vh\") on node \"addons-433000\" DevicePath \"\""
	Sep 27 00:28:15 addons-433000 kubelet[2005]: I0927 00:28:15.090536    2005 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r5kvg\" (UniqueName: \"kubernetes.io/projected/516b6f7b-4fac-4c3f-b845-0484389422ee-kube-api-access-r5kvg\") on node \"addons-433000\" DevicePath \"\""
	Sep 27 00:28:15 addons-433000 kubelet[2005]: I0927 00:28:15.417207    2005 scope.go:117] "RemoveContainer" containerID="63b065ca61806330ae63695f85a698d367c40069acb6178ccca7450c8ec1a9f3"
	Sep 27 00:28:15 addons-433000 kubelet[2005]: I0927 00:28:15.462592    2005 scope.go:117] "RemoveContainer" containerID="63b065ca61806330ae63695f85a698d367c40069acb6178ccca7450c8ec1a9f3"
	Sep 27 00:28:15 addons-433000 kubelet[2005]: E0927 00:28:15.463532    2005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 63b065ca61806330ae63695f85a698d367c40069acb6178ccca7450c8ec1a9f3" containerID="63b065ca61806330ae63695f85a698d367c40069acb6178ccca7450c8ec1a9f3"
	Sep 27 00:28:15 addons-433000 kubelet[2005]: I0927 00:28:15.463598    2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"63b065ca61806330ae63695f85a698d367c40069acb6178ccca7450c8ec1a9f3"} err="failed to get container status \"63b065ca61806330ae63695f85a698d367c40069acb6178ccca7450c8ec1a9f3\": rpc error: code = Unknown desc = Error response from daemon: No such container: 63b065ca61806330ae63695f85a698d367c40069acb6178ccca7450c8ec1a9f3"
	Sep 27 00:28:15 addons-433000 kubelet[2005]: I0927 00:28:15.463645    2005 scope.go:117] "RemoveContainer" containerID="cf81f3542fdd4d37613634a53ff0d4c92f57f22196c7237c21a017cedd149c59"
	Sep 27 00:28:15 addons-433000 kubelet[2005]: I0927 00:28:15.477620    2005 scope.go:117] "RemoveContainer" containerID="cf81f3542fdd4d37613634a53ff0d4c92f57f22196c7237c21a017cedd149c59"
	Sep 27 00:28:15 addons-433000 kubelet[2005]: E0927 00:28:15.478682    2005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: cf81f3542fdd4d37613634a53ff0d4c92f57f22196c7237c21a017cedd149c59" containerID="cf81f3542fdd4d37613634a53ff0d4c92f57f22196c7237c21a017cedd149c59"
	Sep 27 00:28:15 addons-433000 kubelet[2005]: I0927 00:28:15.478705    2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"cf81f3542fdd4d37613634a53ff0d4c92f57f22196c7237c21a017cedd149c59"} err="failed to get container status \"cf81f3542fdd4d37613634a53ff0d4c92f57f22196c7237c21a017cedd149c59\": rpc error: code = Unknown desc = Error response from daemon: No such container: cf81f3542fdd4d37613634a53ff0d4c92f57f22196c7237c21a017cedd149c59"
	Sep 27 00:28:17 addons-433000 kubelet[2005]: I0927 00:28:17.022308    2005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="516b6f7b-4fac-4c3f-b845-0484389422ee" path="/var/lib/kubelet/pods/516b6f7b-4fac-4c3f-b845-0484389422ee/volumes"
	Sep 27 00:28:17 addons-433000 kubelet[2005]: I0927 00:28:17.022615    2005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b49ae8a8-4cbc-4a75-8913-e8be3cc60c32" path="/var/lib/kubelet/pods/b49ae8a8-4cbc-4a75-8913-e8be3cc60c32/volumes"
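Note: the paired RemoveContainer / "No such container" lines are a benign race: the kubelet deletes a container, then a second status query for the same ID fails because it is already gone. An illustrative way to verify the container really is absent (ID prefix taken from the log):

    out/minikube-darwin-amd64 -p addons-433000 ssh -- \
      "docker ps -a --filter id=63b065ca6180 --format '{{.ID}} {{.Status}}'"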
	
	
	==> storage-provisioner [698a8a1c9783] <==
	I0927 00:15:19.239675       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 00:15:19.255325       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 00:15:19.255374       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 00:15:19.272705       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 00:15:19.272850       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-433000_e22c6cda-89c5-4716-9987-abb7dd777eda!
	I0927 00:15:19.281222       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"28831f64-e27e-49d7-a6a2-955ce0658f31", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-433000_e22c6cda-89c5-4716-9987-abb7dd777eda became leader
	I0927 00:15:19.374267       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-433000_e22c6cda-89c5-4716-9987-abb7dd777eda!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p addons-433000 -n addons-433000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-433000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-zqknl ingress-nginx-admission-patch-99n4k
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-433000 describe pod busybox ingress-nginx-admission-create-zqknl ingress-nginx-admission-patch-99n4k
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-433000 describe pod busybox ingress-nginx-admission-create-zqknl ingress-nginx-admission-patch-99n4k: exit status 1 (57.244585ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-433000/192.169.0.2
	Start Time:       Thu, 26 Sep 2024 17:18:59 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vcv9s (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vcv9s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m18s                  default-scheduler  Successfully assigned default/busybox to addons-433000
	  Normal   Pulling    7m45s (x4 over 9m17s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m45s (x4 over 9m17s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m45s (x4 over 9m17s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m34s (x6 over 9m17s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m9s (x21 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zqknl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-99n4k" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-433000 describe pod busybox ingress-nginx-admission-create-zqknl ingress-nginx-admission-patch-99n4k: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.63s)
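Note: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc is rejected with "unauthorized: authentication failed", which points at a registry/credential problem on this runner rather than at the registry addon under test. Reproducing the pull from inside the node isolates it from the harness (illustrative, reusing the ssh form the suite itself uses):

    out/minikube-darwin-amd64 -p addons-433000 ssh -- docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc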

TestCertOptions (251.88s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions


=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-657000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E0926 18:35:20.037632    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:35:29.670246    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:35:36.957467    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:35:57.388194    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-657000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 80 (4m6.196195083s)

-- stdout --
	* [cert-options-657000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-657000" primary control-plane node in "cert-options-657000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-options-657000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 72:34:1b:a7:c5:ee
	* Failed to start hyperkit VM. Running "minikube delete -p cert-options-657000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3a:9:a1:d3:8d:5e
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3a:9:a1:d3:8d:5e
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-657000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-657000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-657000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 50 (163.383081ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-657000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-657000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 50
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-657000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-657000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-657000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 50 (161.194612ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-657000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-657000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 50
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-657000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-26 18:38:23.588109 -0700 PDT m=+5066.317880800
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-657000 -n cert-options-657000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-657000 -n cert-options-657000: exit status 7 (78.269695ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0926 18:38:23.664761    6711 status.go:386] failed to get driver ip: getting IP: IP address is not set
	E0926 18:38:23.664783    6711 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-657000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-options-657000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-657000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-657000: (5.238308528s)
--- FAIL: TestCertOptions (251.88s)
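Note: both provisioning attempts die in "IP address never found in dhcp leases file": the hyperkit driver creates the VM, then polls the macOS bootpd leases file for the VM's MAC address and gives up when no lease ever appears. A sketch for inspecting the leases file on the host (stock macOS path; the MAC is the one from the second attempt above):

    # Shows the lease block for the VM, or nothing if bootpd never answered.
    grep -B 3 -A 3 '3a:9:a1:d3:8d:5e' /var/db/dhcpd_leases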

TestCertExpiration (1739.32s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration


=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-068000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
E0926 18:33:14.524332    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-068000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 80 (4m6.438840019s)

-- stdout --
	* [cert-expiration-068000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-expiration-068000" primary control-plane node in "cert-expiration-068000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-expiration-068000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d6:f8:b7:50:0:b5
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-068000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d2:1c:7e:38:95:72
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d2:1c:7e:38:95:72
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-068000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 80
E0926 18:38:14.528777    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-068000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0926 18:40:29.674106    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:40:36.961613    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-068000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : exit status 80 (21m47.535049729s)

-- stdout --
	* [cert-expiration-068000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-068000" primary control-plane node in "cert-expiration-068000" cluster
	* Updating the running hyperkit "cert-expiration-068000" VM ...
	* Updating the running hyperkit "cert-expiration-068000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-068000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-068000 --memory=2048 --cert-expiration=8760h --driver=hyperkit " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-068000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-068000" primary control-plane node in "cert-expiration-068000" cluster
	* Updating the running hyperkit "cert-expiration-068000" VM ...
	* Updating the running hyperkit "cert-expiration-068000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-068000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-26 19:02:07.799103 -0700 PDT m=+6490.451933110
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-068000 -n cert-expiration-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-068000 -n cert-expiration-068000: exit status 7 (80.664867ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0926 19:02:07.877827    8201 status.go:386] failed to get driver ip: getting IP: IP address is not set
	E0926 19:02:07.877851    8201 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-068000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-expiration-068000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-068000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-068000: (5.266462777s)
--- FAIL: TestCertExpiration (1739.32s)
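
For reference, the sequence the harness drove above reduces to two minikube start invocations against the same profile: one with --cert-expiration=3m, a wait for the certs to lapse, then one with --cert-expiration=8760h whose output is expected to warn about the expired certs. A minimal Go sketch of that flow, assuming os/exec shelling to the binary and profile names shown in the log; this is illustrative only, not the actual cert_options_test.go code:

// Condensed sketch of the TestCertExpiration flow seen above (assumption:
// plain os/exec shelling; the real test asserts on the second run's output).
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	profile := "cert-expiration-068000" // profile name from the log

	// First start: a cluster whose certs expire after three minutes.
	start := exec.Command("out/minikube-darwin-amd64", "start", "-p", profile,
		"--memory=2048", "--cert-expiration=3m", "--driver=hyperkit")
	if out, err := start.CombinedOutput(); err != nil {
		fmt.Printf("initial start failed: %v\n%s", err, out)
		return
	}

	// Let the short-lived certs lapse.
	time.Sleep(3 * time.Minute)

	// Second start: minikube should rotate the certs and warn that the
	// existing ones had expired.
	restart := exec.Command("out/minikube-darwin-amd64", "start", "-p", profile,
		"--memory=2048", "--cert-expiration=8760h", "--driver=hyperkit")
	out, err := restart.CombinedOutput()
	if err != nil {
		fmt.Printf("restart after expiration failed: %v\n%s", err, out)
		return
	}
	if !strings.Contains(string(out), "expired") {
		fmt.Println("start output did not warn about expired certs")
	}
}

In this run the second invocation never reached the certificate check at all: provisioning failed first ("error getting ip during provisioning: IP address is not set"), which is why the expired-certs warning was absent from the output.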

TestDockerFlags (252.41s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-309000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E0926 18:30:29.667660    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:30:29.674542    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:30:29.687719    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:30:29.711080    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:30:29.752954    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:30:29.836322    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:30:29.999684    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:30:30.323112    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:30:30.965100    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:30:32.247573    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:30:34.811059    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:30:36.956821    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:30:39.934595    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:30:50.177914    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:31:10.659722    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:31:51.621773    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-309000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.667499639s)

-- stdout --
	* [docker-flags-309000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "docker-flags-309000" primary control-plane node in "docker-flags-309000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "docker-flags-309000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0926 18:30:04.667262    6531 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:30:04.667523    6531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:30:04.667528    6531 out.go:358] Setting ErrFile to fd 2...
	I0926 18:30:04.667532    6531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:30:04.667712    6531 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 18:30:04.669300    6531 out.go:352] Setting JSON to false
	I0926 18:30:04.692632    6531 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5374,"bootTime":1727395230,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0926 18:30:04.692789    6531 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:30:04.714959    6531 out.go:177] * [docker-flags-309000] minikube v1.34.0 on Darwin 14.6.1
	I0926 18:30:04.756234    6531 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:30:04.756271    6531 notify.go:220] Checking for updates...
	I0926 18:30:04.798205    6531 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 18:30:04.819186    6531 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0926 18:30:04.840007    6531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:30:04.861260    6531 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 18:30:04.882229    6531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:30:04.903435    6531 config.go:182] Loaded profile config "force-systemd-flag-396000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:30:04.903526    6531 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:30:04.932259    6531 out.go:177] * Using the hyperkit driver based on user configuration
	I0926 18:30:04.974042    6531 start.go:297] selected driver: hyperkit
	I0926 18:30:04.974055    6531 start.go:901] validating driver "hyperkit" against <nil>
	I0926 18:30:04.974068    6531 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:30:04.976994    6531 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:30:04.977123    6531 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19711-1128/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0926 18:30:04.985446    6531 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0926 18:30:04.989320    6531 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:30:04.989343    6531 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0926 18:30:04.989371    6531 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 18:30:04.989611    6531 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0926 18:30:04.989649    6531 cni.go:84] Creating CNI manager for ""
	I0926 18:30:04.989685    6531 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:30:04.989690    6531 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 18:30:04.989749    6531 start.go:340] cluster config:
	{Name:docker-flags-309000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-309000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:30:04.989837    6531 iso.go:125] acquiring lock: {Name:mka8a9c5a237c1e4ae233281d2ff7965d13a843d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:30:05.031168    6531 out.go:177] * Starting "docker-flags-309000" primary control-plane node in "docker-flags-309000" cluster
	I0926 18:30:05.051959    6531 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:30:05.051998    6531 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0926 18:30:05.052014    6531 cache.go:56] Caching tarball of preloaded images
	I0926 18:30:05.052131    6531 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 18:30:05.052141    6531 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:30:05.052228    6531 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/docker-flags-309000/config.json ...
	I0926 18:30:05.052247    6531 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/docker-flags-309000/config.json: {Name:mk555b24bff386ae9c6c6d79c9c83372a7ad9001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:30:05.052547    6531 start.go:360] acquireMachinesLock for docker-flags-309000: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:31:01.882687    6531 start.go:364] duration metric: took 56.829603395s to acquireMachinesLock for "docker-flags-309000"
	I0926 18:31:01.882744    6531 start.go:93] Provisioning new machine with config: &{Name:docker-flags-309000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-309000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:31:01.882809    6531 start.go:125] createHost starting for "" (driver="hyperkit")
	I0926 18:31:01.906294    6531 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0926 18:31:01.906443    6531 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:31:01.906476    6531 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:31:01.914964    6531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53867
	I0926 18:31:01.915304    6531 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:31:01.915711    6531 main.go:141] libmachine: Using API Version  1
	I0926 18:31:01.915721    6531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:31:01.915964    6531 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:31:01.916092    6531 main.go:141] libmachine: (docker-flags-309000) Calling .GetMachineName
	I0926 18:31:01.916196    6531 main.go:141] libmachine: (docker-flags-309000) Calling .DriverName
	I0926 18:31:01.916333    6531 start.go:159] libmachine.API.Create for "docker-flags-309000" (driver="hyperkit")
	I0926 18:31:01.916375    6531 client.go:168] LocalClient.Create starting
	I0926 18:31:01.916414    6531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem
	I0926 18:31:01.916464    6531 main.go:141] libmachine: Decoding PEM data...
	I0926 18:31:01.916479    6531 main.go:141] libmachine: Parsing certificate...
	I0926 18:31:01.916535    6531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem
	I0926 18:31:01.916579    6531 main.go:141] libmachine: Decoding PEM data...
	I0926 18:31:01.916589    6531 main.go:141] libmachine: Parsing certificate...
	I0926 18:31:01.916606    6531 main.go:141] libmachine: Running pre-create checks...
	I0926 18:31:01.916612    6531 main.go:141] libmachine: (docker-flags-309000) Calling .PreCreateCheck
	I0926 18:31:01.916717    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:01.916868    6531 main.go:141] libmachine: (docker-flags-309000) Calling .GetConfigRaw
	I0926 18:31:02.006803    6531 main.go:141] libmachine: Creating machine...
	I0926 18:31:02.006812    6531 main.go:141] libmachine: (docker-flags-309000) Calling .Create
	I0926 18:31:02.006927    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:02.007082    6531 main.go:141] libmachine: (docker-flags-309000) DBG | I0926 18:31:02.006919    6550 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 18:31:02.007153    6531 main.go:141] libmachine: (docker-flags-309000) Downloading /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1128/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0926 18:31:02.193852    6531 main.go:141] libmachine: (docker-flags-309000) DBG | I0926 18:31:02.193770    6550 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/id_rsa...
	I0926 18:31:02.412634    6531 main.go:141] libmachine: (docker-flags-309000) DBG | I0926 18:31:02.412555    6550 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/docker-flags-309000.rawdisk...
	I0926 18:31:02.412644    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Writing magic tar header
	I0926 18:31:02.412657    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Writing SSH key tar header
	I0926 18:31:02.413187    6531 main.go:141] libmachine: (docker-flags-309000) DBG | I0926 18:31:02.413148    6550 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000 ...
	I0926 18:31:02.782808    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:02.782849    6531 main.go:141] libmachine: (docker-flags-309000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/hyperkit.pid
	I0926 18:31:02.782882    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Using UUID 1625e84d-9be8-4c7f-b100-aefddb7ab8b9
	I0926 18:31:02.809561    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Generated MAC 26:a4:1a:ce:79:b6
	I0926 18:31:02.809580    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-309000
	I0926 18:31:02.809628    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:02 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1625e84d-9be8-4c7f-b100-aefddb7ab8b9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00019a630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:31:02.809654    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:02 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1625e84d-9be8-4c7f-b100-aefddb7ab8b9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00019a630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:31:02.809732    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:02 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1625e84d-9be8-4c7f-b100-aefddb7ab8b9", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/docker-flags-309000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-309000"}
	I0926 18:31:02.809774    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:02 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1625e84d-9be8-4c7f-b100-aefddb7ab8b9 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/docker-flags-309000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-309000"
	I0926 18:31:02.809797    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:02 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 18:31:02.812634    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:02 DEBUG: hyperkit: Pid is 6551
	I0926 18:31:02.813158    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 0
	I0926 18:31:02.813174    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:02.813209    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:31:02.814281    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for 26:a4:1a:ce:79:b6 in /var/db/dhcpd_leases ...
	I0926 18:31:02.814325    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:31:02.814346    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:31:02.814380    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:31:02.814395    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:31:02.814413    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:31:02.814431    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:31:02.814446    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:31:02.814462    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:31:02.814478    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:31:02.814492    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:31:02.814531    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:31:02.814560    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:31:02.814575    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:31:02.814598    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:31:02.814614    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:31:02.814633    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:31:02.814647    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:31:02.814659    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:31:02.814672    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:31:02.820419    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:02 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 18:31:02.828394    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:02 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 18:31:02.829155    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:31:02.829179    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:31:02.829202    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:31:02.829215    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:31:03.209228    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:03 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 18:31:03.209242    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:03 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 18:31:03.323781    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:31:03.323796    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:31:03.323807    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:31:03.323819    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:31:03.324692    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:03 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 18:31:03.324703    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:03 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 18:31:04.815266    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 1
	I0926 18:31:04.815282    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:04.815365    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:31:04.816215    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for 26:a4:1a:ce:79:b6 in /var/db/dhcpd_leases ...
	I0926 18:31:04.816274    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:31:04.816286    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:31:04.816301    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:31:04.816310    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:31:04.816320    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:31:04.816328    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:31:04.816336    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:31:04.816360    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:31:04.816372    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:31:04.816379    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:31:04.816386    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:31:04.816392    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:31:04.816411    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:31:04.816432    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:31:04.816450    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:31:04.816462    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:31:04.816469    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:31:04.816476    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:31:04.816485    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:31:06.816548    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 2
	I0926 18:31:06.816566    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:06.816639    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:31:06.817547    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for 26:a4:1a:ce:79:b6 in /var/db/dhcpd_leases ...
	I0926 18:31:06.817616    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:31:06.817628    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:31:06.817646    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:31:06.817655    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:31:06.817667    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:31:06.817679    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:31:06.817688    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:31:06.817693    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:31:06.817711    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:31:06.817722    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:31:06.817731    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:31:06.817739    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:31:06.817752    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:31:06.817763    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:31:06.817770    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:31:06.817777    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:31:06.817783    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:31:06.817792    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:31:06.817800    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:31:08.774222    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:08 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0926 18:31:08.774394    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:08 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0926 18:31:08.774405    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:08 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0926 18:31:08.794361    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:31:08 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0926 18:31:08.818533    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 3
	I0926 18:31:08.818555    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:08.818771    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:31:08.820205    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for 26:a4:1a:ce:79:b6 in /var/db/dhcpd_leases ...
	I0926 18:31:08.820335    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:31:08.820348    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:31:08.820360    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:31:08.820367    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:31:08.820376    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:31:08.820384    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:31:08.820409    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:31:08.820423    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:31:08.820433    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:31:08.820444    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:31:08.820462    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:31:08.820474    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:31:08.820486    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:31:08.820496    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:31:08.820510    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:31:08.820529    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:31:08.820546    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:31:08.820567    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:31:08.820592    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:31:10.822343    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 4
	I0926 18:31:10.822359    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:10.822439    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:31:10.823237    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for 26:a4:1a:ce:79:b6 in /var/db/dhcpd_leases ...
	I0926 18:31:10.823307    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:31:10.823317    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:31:10.823340    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:31:10.823357    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:31:10.823364    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:31:10.823379    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:31:10.823386    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:31:10.823394    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:31:10.823401    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:31:10.823409    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:31:10.823415    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:31:10.823434    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:31:10.823440    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:31:10.823446    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:31:10.823453    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:31:10.823460    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:31:10.823474    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:31:10.823485    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:31:10.823493    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:31:12.823966    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 5
	I0926 18:31:12.823978    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:12.824018    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:31:12.824810    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for 26:a4:1a:ce:79:b6 in /var/db/dhcpd_leases ...
	I0926 18:31:12.824868    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:31:12.824879    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:31:12.824902    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:31:12.824913    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:31:12.824923    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:31:12.824931    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:31:12.824941    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:31:12.824949    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:31:12.824956    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:31:12.824963    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:31:12.824979    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:31:12.824989    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:31:12.825000    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:31:12.825008    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:31:12.825015    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:31:12.825022    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:31:12.825039    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:31:12.825048    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:31:12.825057    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	[Attempts 6-18 (18:31:14 through 18:31:38) omitted here: every ~2s the driver re-reads /var/db/dhcpd_leases, finds the same 18 minikube entries (192.169.0.2 through 192.169.0.19), and does not find 26:a4:1a:ce:79:b6]
	I0926 18:31:40.860371    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 19
	I0926 18:31:40.860386    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:40.860453    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:31:40.861239    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for 26:a4:1a:ce:79:b6 in /var/db/dhcpd_leases ...
	I0926 18:31:40.861291    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:31:40.861301    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:31:40.861309    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:31:40.861315    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:31:40.861335    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:31:40.861350    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:31:40.861357    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:31:40.861364    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:31:40.861371    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:31:40.861379    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:31:40.861387    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:31:40.861393    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:31:40.861405    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:31:40.861418    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:31:40.861427    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:31:40.861435    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:31:40.861442    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:31:40.861451    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:31:40.861466    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:31:42.863545    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 20
	I0926 18:31:42.863557    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:42.863620    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:31:42.864408    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for 26:a4:1a:ce:79:b6 in /var/db/dhcpd_leases ...
	I0926 18:31:42.864474    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:31:42.864482    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:31:42.864491    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:31:42.864496    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:31:42.864502    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:31:42.864507    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:31:42.864558    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:31:42.864571    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:31:42.864590    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:31:42.864599    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:31:42.864606    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:31:42.864622    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:31:42.864634    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:31:42.864646    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:31:42.864652    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:31:42.864661    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:31:42.864668    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:31:42.864676    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:31:42.864683    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:31:44.865454    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 21
	I0926 18:31:44.865467    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:44.865530    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:31:44.866379    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for 26:a4:1a:ce:79:b6 in /var/db/dhcpd_leases ...
	I0926 18:31:44.866432    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:31:44.866444    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:31:44.866464    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:31:44.866478    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:31:44.866488    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:31:44.866497    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:31:44.866503    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:31:44.866509    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:31:44.866515    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:31:44.866522    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:31:44.866530    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:31:44.866546    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:31:44.866562    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:31:44.866570    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:31:44.866575    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:31:44.866590    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:31:44.866601    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:31:44.866610    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:31:44.866622    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:31:46.868664    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 22
	I0926 18:31:46.868683    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:46.868734    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:31:46.869533    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for 26:a4:1a:ce:79:b6 in /var/db/dhcpd_leases ...
	I0926 18:31:46.869579    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:31:46.869592    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:31:46.869608    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:31:46.869618    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:31:46.869632    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:31:46.869647    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:31:46.869658    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:31:46.869668    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:31:46.869674    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:31:46.869682    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:31:46.869697    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:31:46.869706    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:31:46.869717    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:31:46.869723    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:31:46.869734    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:31:46.869742    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:31:46.869748    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:31:46.869756    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:31:46.869764    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:31:48.869810    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 23
	I0926 18:31:48.869830    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:48.869921    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:31:48.870785    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for 26:a4:1a:ce:79:b6 in /var/db/dhcpd_leases ...
	I0926 18:31:48.870849    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:31:48.870861    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:31:48.870872    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:31:48.870883    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:31:48.870894    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:31:48.870905    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:31:48.870914    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:31:48.870930    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:31:48.870955    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:31:48.870967    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:31:48.870976    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:31:48.870984    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:31:48.870996    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:31:48.871005    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:31:48.871013    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:31:48.871020    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:31:48.871027    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:31:48.871035    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:31:48.871048    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:31:50.871509    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 24
	I0926 18:31:50.871523    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:50.871593    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:31:50.872413    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for 26:a4:1a:ce:79:b6 in /var/db/dhcpd_leases ...
	I0926 18:31:50.872473    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:31:50.872491    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:31:50.872501    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:31:50.872513    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:31:50.872536    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:31:50.872548    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:31:50.872555    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:31:50.872563    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:31:50.872570    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:31:50.872577    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:31:50.872583    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:31:50.872591    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:31:50.872598    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:31:50.872606    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:31:50.872618    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:31:50.872626    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:31:50.872632    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:31:50.872638    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:31:50.872658    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:31:52.873041    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 25
	I0926 18:31:52.873056    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:52.873143    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:31:52.873946    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for 26:a4:1a:ce:79:b6 in /var/db/dhcpd_leases ...
	I0926 18:31:52.873990    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:31:52.873999    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:31:52.874011    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:31:52.874021    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:31:52.874036    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:31:52.874047    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:31:52.874056    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:31:52.874061    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:31:52.874067    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:31:52.874072    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:31:52.874086    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:31:52.874100    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:31:52.874107    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:31:52.874114    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:31:52.874122    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:31:52.874127    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:31:52.874133    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:31:52.874140    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:31:52.874159    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:31:54.876097    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 26
	I0926 18:31:54.876112    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:54.876168    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:31:54.877053    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for 26:a4:1a:ce:79:b6 in /var/db/dhcpd_leases ...
	I0926 18:31:54.877075    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:31:54.877090    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:31:54.877099    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:31:54.877108    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:31:54.877124    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:31:54.877136    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:31:54.877143    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:31:54.877151    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:31:54.877158    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:31:54.877164    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:31:54.877169    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:31:54.877176    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:31:54.877183    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:31:54.877197    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:31:54.877207    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:31:54.877219    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:31:54.877231    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:31:54.877249    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:31:54.877257    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:31:56.878357    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 27
	I0926 18:31:56.878372    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:56.878438    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:31:56.879278    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for 26:a4:1a:ce:79:b6 in /var/db/dhcpd_leases ...
	I0926 18:31:56.879340    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:31:56.879351    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:31:56.879358    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:31:56.879365    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:31:56.879390    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:31:56.879399    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:31:56.879406    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:31:56.879414    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:31:56.879426    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:31:56.879436    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:31:56.879443    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:31:56.879458    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:31:56.879475    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:31:56.879486    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:31:56.879494    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:31:56.879499    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:31:56.879508    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:31:56.879517    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:31:56.879534    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:31:58.879771    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 28
	I0926 18:31:58.879786    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:58.879842    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:31:58.880656    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for 26:a4:1a:ce:79:b6 in /var/db/dhcpd_leases ...
	I0926 18:31:58.880720    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:31:58.880734    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:31:58.880743    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:31:58.880750    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:31:58.880757    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:31:58.880763    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:31:58.880790    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:31:58.880803    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:31:58.880815    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:31:58.880824    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:31:58.880834    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:31:58.880842    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:31:58.880858    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:31:58.880886    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:31:58.880920    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:31:58.880927    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:31:58.880933    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:31:58.880940    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:31:58.880948    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:00.881177    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 29
	I0926 18:32:00.881192    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:00.881270    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:32:00.882118    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for 26:a4:1a:ce:79:b6 in /var/db/dhcpd_leases ...
	I0926 18:32:00.882172    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:00.882183    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:00.882194    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:00.882200    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:00.882206    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:00.882219    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:00.882236    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:00.882251    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:00.882265    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:00.882276    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:00.882285    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:00.882293    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:00.882309    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:00.882324    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:00.882335    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:00.882350    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:00.882366    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:00.882380    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:00.882397    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
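
The dumps above show the hyperkit driver polling macOS's DHCP lease database roughly every two seconds, scanning all 18 entries for one whose HWAddress matches the MAC generated for this VM (26:a4:1a:ce:79:b6); no such entry ever appears. A minimal sketch of that lookup, assuming the brace-delimited name=/ip_address=/hw_address= entry layout of /var/db/dhcpd_leases (the log prints each entry back in parsed form); the helper name findIPForMAC is illustrative, not the driver's actual code:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPForMAC scans a macOS dhcpd_leases file for an entry whose
// hw_address field contains the given MAC and returns its ip_address.
func findIPForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	matched := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // start of a new lease entry
			ip, matched = "", false
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address=1,26:a4:1a:ce:79:b6 — the MAC follows a type prefix
			matched = strings.Contains(line, mac)
		case line == "}": // end of entry: report a hit
			if matched && ip != "" {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("could not find an IP address for %s", mac)
}

func main() {
	ip, err := findIPForMAC("/var/db/dhcpd_leases", "26:a4:1a:ce:79:b6")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip)
}
```

Polled on that two-second cadence, the driver gives up after about a minute here, which surfaces as the "IP address never found in dhcp leases file" error just below.
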
	I0926 18:32:02.884420    6531 client.go:171] duration metric: took 1m0.967480695s to LocalClient.Create
	I0926 18:32:04.885542    6531 start.go:128] duration metric: took 1m3.002147512s to createHost
	I0926 18:32:04.885562    6531 start.go:83] releasing machines lock for "docker-flags-309000", held for 1m3.002288989s
	W0926 18:32:04.885586    6531 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 26:a4:1a:ce:79:b6
	I0926 18:32:04.885937    6531 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:32:04.885956    6531 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:32:04.895227    6531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53872
	I0926 18:32:04.895751    6531 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:32:04.896199    6531 main.go:141] libmachine: Using API Version  1
	I0926 18:32:04.896216    6531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:32:04.896521    6531 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:32:04.896995    6531 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:32:04.897033    6531 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:32:04.905860    6531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53874
	I0926 18:32:04.906237    6531 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:32:04.906720    6531 main.go:141] libmachine: Using API Version  1
	I0926 18:32:04.906737    6531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:32:04.906993    6531 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:32:04.907120    6531 main.go:141] libmachine: (docker-flags-309000) Calling .GetState
	I0926 18:32:04.907233    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:04.907310    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:32:04.908326    6531 main.go:141] libmachine: (docker-flags-309000) Calling .DriverName
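
Each "Launching plugin server for driver hyperkit" / "Plugin server listening at address 127.0.0.1:…" pair reflects libmachine's plugin model: the driver runs as a separate binary serving RPC on an ephemeral localhost port, and calls such as .GetVersion and .GetMachineName are remote invocations against it. A toy sketch of that pattern using Go's net/rpc (the Driver type and method bodies are invented for illustration; libmachine layers its own plumbing on top):

```go
package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

// Driver is a toy RPC service mimicking the calls visible in the log.
type Driver struct{ machineName string }

func (d *Driver) GetVersion(_ string, v *int) error { *v = 1; return nil }

func (d *Driver) GetMachineName(_ string, name *string) error {
	*name = d.machineName
	return nil
}

func main() {
	// Plugin side: listen on an ephemeral localhost port, as in
	// "Plugin server listening at address 127.0.0.1:53872".
	srv := rpc.NewServer()
	if err := srv.Register(&Driver{machineName: "docker-flags-309000"}); err != nil {
		log.Fatal(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Plugin server listening at address", ln.Addr())
	go srv.Accept(ln)

	// Host side: dial the plugin and make the calls the log shows.
	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	var version int
	if err := client.Call("Driver.GetVersion", "", &version); err != nil {
		log.Fatal(err)
	}
	var name string
	if err := client.Call("Driver.GetMachineName", "", &name); err != nil {
		log.Fatal(err)
	}
	fmt.Println("Using API Version", version, "for machine", name)
}
```
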
	I0926 18:32:04.929955    6531 out.go:177] * Deleting "docker-flags-309000" in hyperkit ...
	I0926 18:32:04.971723    6531 main.go:141] libmachine: (docker-flags-309000) Calling .Remove
	I0926 18:32:04.971843    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:04.971851    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:04.971921    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:32:04.972925    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:04.972972    6531 main.go:141] libmachine: (docker-flags-309000) DBG | waiting for graceful shutdown
	I0926 18:32:05.973339    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:05.973490    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:32:05.974470    6531 main.go:141] libmachine: (docker-flags-309000) DBG | waiting for graceful shutdown
	I0926 18:32:06.975075    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:06.975147    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:32:06.976817    6531 main.go:141] libmachine: (docker-flags-309000) DBG | waiting for graceful shutdown
	I0926 18:32:07.977187    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:07.977272    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:32:07.977869    6531 main.go:141] libmachine: (docker-flags-309000) DBG | waiting for graceful shutdown
	I0926 18:32:08.978486    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:08.978566    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:32:08.979298    6531 main.go:141] libmachine: (docker-flags-309000) DBG | waiting for graceful shutdown
	I0926 18:32:09.981023    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:09.981109    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6551
	I0926 18:32:09.982078    6531 main.go:141] libmachine: (docker-flags-309000) DBG | sending sigkill
	I0926 18:32:09.982089    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:09.992791    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:32:09 WARN : hyperkit: failed to read stdout: EOF
	I0926 18:32:09.992813    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:32:09 WARN : hyperkit: failed to read stderr: EOF
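
The .Remove sequence above is a standard terminate-then-kill escalation: poll the hyperkit pid about once a second while "waiting for graceful shutdown", then fall back to "sending sigkill" when it never exits. A sketch of that pattern, with the grace period and helper name as assumptions rather than the driver's real values:

```go
package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// stopProcess asks pid to exit cleanly, then kills it if it is still
// running once the grace period has elapsed.
func stopProcess(pid int, grace time.Duration) error {
	proc, err := os.FindProcess(pid) // always succeeds on Unix
	if err != nil {
		return err
	}
	// Request a graceful shutdown first.
	if err := proc.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	deadline := time.Now().Add(grace)
	for time.Now().Before(deadline) {
		// Signal 0 probes for existence without sending anything.
		if err := proc.Signal(syscall.Signal(0)); err != nil {
			return nil // process is gone
		}
		fmt.Println("waiting for graceful shutdown")
		time.Sleep(time.Second)
	}
	fmt.Println("sending sigkill")
	return proc.Signal(syscall.SIGKILL)
}

func main() {
	// 6551 is the hyperkit pid from the log, used purely for illustration.
	if err := stopProcess(6551, 6*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
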
	W0926 18:32:10.010735    6531 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 26:a4:1a:ce:79:b6
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 26:a4:1a:ce:79:b6
	I0926 18:32:10.010757    6531 start.go:729] Will try again in 5 seconds ...
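
StartHost failures are treated as retryable: the error is demoted to a warning and the whole host-creation path is re-run after a pause ("Will try again in 5 seconds"). A compact sketch of that outer retry loop (the attempt count and function names are illustrative; this run shows one initial attempt plus one retry):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryStartHost re-runs a failing start function, pausing between
// attempts the way start.go does above.
func retryStartHost(start func() error, attempts int, wait time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = start(); err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(wait)
	}
	return fmt.Errorf("start host failed after %d attempts: %w", attempts, err)
}

func main() {
	err := retryStartHost(func() error {
		return errors.New("IP address never found in dhcp leases file")
	}, 2, 5*time.Second)
	fmt.Println(err)
}
```
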
	I0926 18:32:15.012869    6531 start.go:360] acquireMachinesLock for docker-flags-309000: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:33:07.945421    6531 start.go:364] duration metric: took 52.932023026s to acquireMachinesLock for "docker-flags-309000"
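
Before re-provisioning, the process must re-acquire the per-profile machines lock; with other tests contending for it, that step alone took 52.9s here against the configured Delay:500ms / Timeout:13m0s. minikube uses a mutex library for this; a lock-file stand-in that polls on the same delay/timeout schedule might look like the following (paths and names are assumptions):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// acquireMachinesLock emulates the polling lock in the log: retry every
// delay until timeout. An O_EXCL lock file stands in for the real mutex.
func acquireMachinesLock(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil // release callback
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("acquiring %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	release, err := acquireMachinesLock("/tmp/minikube-machines.lock",
		500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Printf("took %s to acquireMachinesLock\n", time.Since(start))
}
```
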
	I0926 18:33:07.945447    6531 start.go:93] Provisioning new machine with config: &{Name:docker-flags-309000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-309000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:33:07.945526    6531 start.go:125] createHost starting for "" (driver="hyperkit")
	I0926 18:33:07.987743    6531 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0926 18:33:07.987832    6531 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:33:07.987846    6531 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:33:07.996569    6531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53878
	I0926 18:33:07.996956    6531 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:33:07.997340    6531 main.go:141] libmachine: Using API Version  1
	I0926 18:33:07.997354    6531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:33:07.997572    6531 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:33:07.997693    6531 main.go:141] libmachine: (docker-flags-309000) Calling .GetMachineName
	I0926 18:33:07.997798    6531 main.go:141] libmachine: (docker-flags-309000) Calling .DriverName
	I0926 18:33:07.997904    6531 start.go:159] libmachine.API.Create for "docker-flags-309000" (driver="hyperkit")
	I0926 18:33:07.997920    6531 client.go:168] LocalClient.Create starting
	I0926 18:33:07.997946    6531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem
	I0926 18:33:07.998000    6531 main.go:141] libmachine: Decoding PEM data...
	I0926 18:33:07.998010    6531 main.go:141] libmachine: Parsing certificate...
	I0926 18:33:07.998045    6531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem
	I0926 18:33:07.998082    6531 main.go:141] libmachine: Decoding PEM data...
	I0926 18:33:07.998102    6531 main.go:141] libmachine: Parsing certificate...
	I0926 18:33:07.998131    6531 main.go:141] libmachine: Running pre-create checks...
	I0926 18:33:07.998137    6531 main.go:141] libmachine: (docker-flags-309000) Calling .PreCreateCheck
	I0926 18:33:07.998216    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:33:07.998250    6531 main.go:141] libmachine: (docker-flags-309000) Calling .GetConfigRaw
	I0926 18:33:08.008671    6531 main.go:141] libmachine: Creating machine...
	I0926 18:33:08.008679    6531 main.go:141] libmachine: (docker-flags-309000) Calling .Create
	I0926 18:33:08.008766    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:33:08.008915    6531 main.go:141] libmachine: (docker-flags-309000) DBG | I0926 18:33:08.008760    6585 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 18:33:08.008975    6531 main.go:141] libmachine: (docker-flags-309000) Downloading /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1128/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0926 18:33:08.432182    6531 main.go:141] libmachine: (docker-flags-309000) DBG | I0926 18:33:08.432073    6585 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/id_rsa...
	I0926 18:33:08.624733    6531 main.go:141] libmachine: (docker-flags-309000) DBG | I0926 18:33:08.624681    6585 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/docker-flags-309000.rawdisk...
	I0926 18:33:08.624749    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Writing magic tar header
	I0926 18:33:08.624764    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Writing SSH key tar header
	I0926 18:33:08.625129    6531 main.go:141] libmachine: (docker-flags-309000) DBG | I0926 18:33:08.625092    6585 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000 ...
	I0926 18:33:08.989886    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:33:08.989915    6531 main.go:141] libmachine: (docker-flags-309000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/hyperkit.pid
	I0926 18:33:08.989936    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Using UUID ec519978-acff-4bb0-a8d5-02bb56ca2b12
	I0926 18:33:09.015074    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Generated MAC ea:53:a9:6f:41:64
	I0926 18:33:09.015094    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-309000
	I0926 18:33:09.015123    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ec519978-acff-4bb0-a8d5-02bb56ca2b12", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:33:09.015155    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ec519978-acff-4bb0-a8d5-02bb56ca2b12", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:33:09.015212    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ec519978-acff-4bb0-a8d5-02bb56ca2b12", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/docker-flags-309000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-309000"}
	I0926 18:33:09.015252    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ec519978-acff-4bb0-a8d5-02bb56ca2b12 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/docker-flags-309000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-309000"
	I0926 18:33:09.015273    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 18:33:09.018215    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 DEBUG: hyperkit: Pid is 6600
	I0926 18:33:09.018780    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 0
	I0926 18:33:09.018794    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:33:09.018891    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6600
	I0926 18:33:09.020021    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for ea:53:a9:6f:41:64 in /var/db/dhcpd_leases ...
	I0926 18:33:09.020047    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:33:09.020061    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:33:09.020077    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:33:09.020091    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:33:09.020102    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:33:09.020118    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:33:09.020132    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:33:09.020143    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:33:09.020155    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:33:09.020168    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:33:09.020180    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:33:09.020190    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:33:09.020200    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:33:09.020218    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:33:09.020233    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:33:09.020245    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:33:09.020257    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:33:09.020268    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:33:09.020281    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:33:09.026083    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 18:33:09.034103    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/docker-flags-309000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 18:33:09.034862    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:33:09.034888    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:33:09.034907    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:33:09.034920    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:33:09.412413    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 18:33:09.412429    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 18:33:09.527317    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:33:09.527345    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:33:09.527362    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:33:09.527372    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:33:09.528213    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 18:33:09.528234    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 18:33:11.021465    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 1
	I0926 18:33:11.021488    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:33:11.021567    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6600
	I0926 18:33:11.022399    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for ea:53:a9:6f:41:64 in /var/db/dhcpd_leases ...
	I0926 18:33:11.022425    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:33:11.022444    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:33:11.022457    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:33:11.022482    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:33:11.022507    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:33:11.022520    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:33:11.022530    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:33:11.022539    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:33:11.022546    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:33:11.022574    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:33:11.022587    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:33:11.022594    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:33:11.022614    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:33:11.022633    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:33:11.022642    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:33:11.022654    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:33:11.022664    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:33:11.022671    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:33:11.022696    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:33:13.023455    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 2
	I0926 18:33:13.023471    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:33:13.023551    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6600
	I0926 18:33:13.024578    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for ea:53:a9:6f:41:64 in /var/db/dhcpd_leases ...
	I0926 18:33:13.024590    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:33:13.024615    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:33:13.024624    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:33:13.024634    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:33:13.024642    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:33:13.024653    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:33:13.024661    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:33:13.024680    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:33:13.024694    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:33:13.024702    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:33:13.024710    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:33:13.024720    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:33:13.024727    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:33:13.024734    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:33:13.024741    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:33:13.024749    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:33:13.024757    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:33:13.024770    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:33:13.024780    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:33:14.979374    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:14 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0926 18:33:14.979504    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:14 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0926 18:33:14.979512    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:14 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0926 18:33:14.999397    6531 main.go:141] libmachine: (docker-flags-309000) DBG | 2024/09/26 18:33:14 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0926 18:33:15.025570    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 3
	I0926 18:33:15.025597    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:33:15.025829    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6600
	I0926 18:33:15.027180    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for ea:53:a9:6f:41:64 in /var/db/dhcpd_leases ...
	I0926 18:33:15.027274    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:33:15.027287    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:33:15.027297    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:33:15.027305    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:33:15.027330    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:33:15.027348    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:33:15.027365    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:33:15.027391    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:33:15.027408    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:33:15.027417    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:33:15.027428    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:33:15.027438    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:33:15.027447    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:33:15.027470    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:33:15.027488    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:33:15.027498    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:33:15.027510    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:33:15.027524    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:33:15.027533    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:33:17.029121    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 4
	I0926 18:33:17.029139    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:33:17.029230    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6600
	I0926 18:33:17.030012    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for ea:53:a9:6f:41:64 in /var/db/dhcpd_leases ...
	I0926 18:33:17.030068    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:33:17.030079    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:33:17.030103    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:33:17.030118    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:33:17.030126    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:33:17.030132    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:33:17.030139    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:33:17.030155    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:33:17.030166    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:33:17.030180    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:33:17.030191    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:33:17.030204    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:33:17.030213    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:33:17.030220    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:33:17.030227    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:33:17.030233    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:33:17.030241    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:33:17.030248    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:33:17.030255    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:33:19.030658    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 5
	I0926 18:33:19.030672    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:33:19.030759    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6600
	I0926 18:33:19.031546    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for ea:53:a9:6f:41:64 in /var/db/dhcpd_leases ...
	I0926 18:33:19.031564    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:33:19.031581    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:33:19.031591    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:33:19.031599    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:33:19.031614    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:33:19.031628    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:33:19.031636    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:33:19.031643    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:33:19.031648    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:33:19.031660    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:33:19.031675    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:33:19.031691    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:33:19.031701    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:33:19.031722    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:33:19.031741    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:33:19.031749    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:33:19.031757    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:33:19.031769    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:33:19.031781    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:33:21.033798    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 6
	I0926 18:33:21.033816    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:33:21.033858    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6600
	I0926 18:33:21.034786    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for ea:53:a9:6f:41:64 in /var/db/dhcpd_leases ...
	I0926 18:33:21.034826    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:33:21.034838    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:33:21.034851    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:33:21.034861    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:33:21.034872    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:33:21.034880    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:33:21.034894    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:33:21.034903    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:33:21.034910    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:33:21.034919    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:33:21.034932    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:33:21.034942    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:33:21.034949    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:33:21.034954    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:33:21.034961    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:33:21.034968    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:33:21.034974    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:33:21.034981    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:33:21.034997    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:33:23.035018    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 7
	I0926 18:33:23.035029    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:33:23.035089    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6600
	I0926 18:33:23.035870    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for ea:53:a9:6f:41:64 in /var/db/dhcpd_leases ...
	I0926 18:33:23.035918    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:33:23.035930    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:33:23.035941    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:33:23.035946    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:33:23.035958    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:33:23.035967    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:33:23.035998    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:33:23.036010    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:33:23.036033    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:33:23.036064    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:33:23.036070    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:33:23.036075    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:33:23.036087    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:33:23.036106    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:33:23.036118    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:33:23.036126    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:33:23.036136    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:33:23.036144    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:33:23.036159    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:33:25.037017    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 8
	I0926 18:33:25.037029    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:33:25.037114    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6600
	I0926 18:33:25.037929    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for ea:53:a9:6f:41:64 in /var/db/dhcpd_leases ...
	I0926 18:33:25.037989    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:33:25.037999    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:33:25.038008    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:33:25.038015    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:33:25.038021    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:33:25.038027    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:33:25.038048    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:33:25.038064    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:33:25.038074    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:33:25.038082    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:33:25.038106    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:33:25.038120    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:33:25.038128    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:33:25.038139    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:33:25.038155    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:33:25.038167    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:33:25.038182    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:33:25.038191    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:33:25.038198    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:33:27.039494    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 9
	I0926 18:33:27.039505    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:33:27.039570    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6600
	I0926 18:33:27.040348    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for ea:53:a9:6f:41:64 in /var/db/dhcpd_leases ...
	I0926 18:33:27.040405    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:33:27.040417    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:33:27.040426    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:33:27.040432    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:33:27.040438    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:33:27.040445    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:33:27.040464    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:33:27.040471    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:33:27.040478    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:33:27.040493    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:33:27.040500    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:33:27.040507    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:33:27.040529    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:33:27.040543    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:33:27.040567    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:33:27.040599    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:33:27.040611    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:33:27.040623    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:33:27.040632    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:33:29.041280    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 10
	I0926 18:33:29.041293    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:33:29.041370    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6600
	I0926 18:33:29.042136    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for ea:53:a9:6f:41:64 in /var/db/dhcpd_leases ...
	I0926 18:33:29.042194    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:33:29.042207    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:33:29.042215    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:33:29.042221    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:33:29.042236    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:33:29.042260    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:33:29.042269    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:33:29.042278    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:33:29.042284    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:33:29.042290    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:33:29.042309    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:33:29.042321    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:33:29.042333    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:33:29.042341    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:33:29.042347    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:33:29.042355    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:33:29.042362    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:33:29.042367    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:33:29.042375    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	[attempts 11-23 (18:33:31 through 18:33:55) elided: each ~2 s retry repeated the identical 18-entry /var/db/dhcpd_leases scan shown above, with no entry for ea:53:a9:6f:41:64]
	I0926 18:33:57.073720    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 24
	I0926 18:33:57.073734    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:33:57.073788    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6600
	I0926 18:33:57.074645    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for ea:53:a9:6f:41:64 in /var/db/dhcpd_leases ...
	I0926 18:33:57.074678    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:33:57.074688    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:33:57.074720    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:33:57.074730    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:33:57.074737    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:33:57.074743    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:33:57.074749    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:33:57.074755    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:33:57.074768    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:33:57.074779    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:33:57.074797    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:33:57.074804    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:33:57.074819    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:33:57.074832    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:33:57.074840    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:33:57.074851    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:33:57.074861    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:33:57.074869    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:33:57.074880    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:33:59.076105    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 25
	I0926 18:33:59.076116    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:33:59.076198    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6600
	I0926 18:33:59.077006    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for ea:53:a9:6f:41:64 in /var/db/dhcpd_leases ...
	I0926 18:33:59.077047    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:33:59.077068    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:33:59.077118    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:33:59.077146    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:33:59.077157    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:33:59.077165    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:33:59.077172    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:33:59.077179    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:33:59.077185    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:33:59.077192    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:33:59.077199    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:33:59.077206    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:33:59.077213    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:33:59.077219    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:33:59.077233    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:33:59.077251    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:33:59.077259    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:33:59.077266    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:33:59.077284    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:34:01.079296    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 26
	I0926 18:34:01.079316    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:34:01.079374    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6600
	I0926 18:34:01.080219    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for ea:53:a9:6f:41:64 in /var/db/dhcpd_leases ...
	I0926 18:34:01.080269    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:34:01.080279    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:34:01.080287    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:34:01.080295    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:34:01.080303    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:34:01.080314    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:34:01.080321    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:34:01.080327    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:34:01.080343    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:34:01.080355    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:34:01.080363    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:34:01.080369    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:34:01.080375    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:34:01.080389    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:34:01.080401    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:34:01.080409    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:34:01.080414    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:34:01.080431    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:34:01.080446    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:34:03.082429    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 27
	I0926 18:34:03.082445    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:34:03.082525    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6600
	I0926 18:34:03.083352    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for ea:53:a9:6f:41:64 in /var/db/dhcpd_leases ...
	I0926 18:34:03.083415    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:34:03.083426    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:34:03.083435    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:34:03.083442    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:34:03.083449    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:34:03.083456    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:34:03.083465    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:34:03.083473    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:34:03.083493    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:34:03.083507    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:34:03.083521    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:34:03.083530    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:34:03.083538    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:34:03.083552    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:34:03.083561    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:34:03.083571    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:34:03.083579    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:34:03.083596    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:34:03.083607    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:34:05.083848    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 28
	I0926 18:34:05.083870    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:34:05.083927    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6600
	I0926 18:34:05.084926    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for ea:53:a9:6f:41:64 in /var/db/dhcpd_leases ...
	I0926 18:34:05.084980    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:34:05.084990    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:34:05.085000    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:34:05.085005    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:34:05.085023    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:34:05.085046    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:34:05.085056    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:34:05.085072    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:34:05.085085    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:34:05.085093    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:34:05.085101    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:34:05.085108    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:34:05.085115    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:34:05.085122    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:34:05.085130    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:34:05.085138    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:34:05.085144    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:34:05.085150    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:34:05.085158    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:34:07.087140    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Attempt 29
	I0926 18:34:07.087156    6531 main.go:141] libmachine: (docker-flags-309000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:34:07.087186    6531 main.go:141] libmachine: (docker-flags-309000) DBG | hyperkit pid from json: 6600
	I0926 18:34:07.088011    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Searching for ea:53:a9:6f:41:64 in /var/db/dhcpd_leases ...
	I0926 18:34:07.088024    6531 main.go:141] libmachine: (docker-flags-309000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:34:07.088033    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:34:07.088040    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:34:07.088046    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:34:07.088053    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:34:07.088062    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:34:07.088068    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:34:07.088080    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:34:07.088094    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:34:07.088102    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:34:07.088109    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:34:07.088116    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:34:07.088141    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:34:07.088149    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:34:07.088156    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:34:07.088162    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:34:07.088177    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:34:07.088190    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:34:07.088205    6531 main.go:141] libmachine: (docker-flags-309000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:34:09.089089    6531 client.go:171] duration metric: took 1m1.09060765s to LocalClient.Create
	I0926 18:34:11.091131    6531 start.go:128] duration metric: took 1m3.145023625s to createHost
	I0926 18:34:11.091175    6531 start.go:83] releasing machines lock for "docker-flags-309000", held for 1m3.145170458s
	W0926 18:34:11.091269    6531 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p docker-flags-309000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ea:53:a9:6f:41:64
	* Failed to start hyperkit VM. Running "minikube delete -p docker-flags-309000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ea:53:a9:6f:41:64
	I0926 18:34:11.154611    6531 out.go:201] 
	W0926 18:34:11.175674    6531 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ea:53:a9:6f:41:64
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ea:53:a9:6f:41:64
	W0926 18:34:11.175693    6531 out.go:270] * 
	* 
	W0926 18:34:11.176452    6531 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:34:11.238678    6531 out.go:201] 

                                                
                                                
** /stderr **
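
The "Attempt N" blocks in the stderr log above show the hyperkit driver polling macOS's DHCP lease database, /var/db/dhcpd_leases, roughly every two seconds for the VM's freshly generated MAC address (ea:53:a9:6f:41:64); per the duration metric, the create step gives up after about a minute when no matching lease ever appears, which surfaces as the "IP address never found in dhcp leases file" error. A minimal Go sketch of the matching step, using the entry shape exactly as printed in the log (illustrative only, not minikube's actual parser):

	package main

	import (
		"fmt"
		"strings"
	)

	// findIPForMAC scans lease entries of the form printed in the log,
	// e.g. "{Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:... Lease:0x66f6009b}",
	// and returns the IP bound to the target hardware address, if any.
	func findIPForMAC(entries []string, mac string) (string, bool) {
		for _, e := range entries {
			var ip, hw string
			for _, field := range strings.Fields(strings.Trim(e, "{}")) {
				if strings.HasPrefix(field, "IPAddress:") {
					ip = strings.TrimPrefix(field, "IPAddress:")
				} else if strings.HasPrefix(field, "HWAddress:") {
					hw = strings.TrimPrefix(field, "HWAddress:")
				}
			}
			if hw == mac {
				return ip, true
			}
		}
		return "", false
	}

	func main() {
		// Two of the 18 entries the driver kept finding; the MAC generated
		// for docker-flags-309000 is absent, so the caller sleeps and retries.
		entries := []string{
			"{Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}",
			"{Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}",
		}
		if ip, ok := findIPForMAC(entries, "ea:53:a9:6f:41:64"); ok {
			fmt.Println("found IP:", ip)
		} else {
			fmt.Println("IP address not found; retry until timeout")
		}
	}

Because the guest never completed DHCP, every scan returns nothing and the retry loop exhausts its timeout, producing the GUEST_PROVISION exit above.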
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-309000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-309000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-309000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 50 (179.034415ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-309000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-309000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 50
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-309000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-309000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 50 (166.665042ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-309000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-309000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 50
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-309000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
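Both failed expectations above reduce to substring checks over the captured `systemctl show docker` output: each --docker-env pair must appear under the Environment property and each --docker-opt flag under ExecStart; since both ssh commands returned only "\n\n", every check fails. A hedged sketch of that comparison (illustrative; the real assertions live in docker_test.go and shell out to the minikube binary):

	package main

	import (
		"fmt"
		"strings"
	)

	// assertContainsAll returns one error per expected fragment missing
	// from the captured systemctl output.
	func assertContainsAll(output string, expected []string) []error {
		var errs []error
		for _, want := range expected {
			if !strings.Contains(output, want) {
				errs = append(errs, fmt.Errorf("expected %q to be included in %q", want, output))
			}
		}
		return errs
	}

	func main() {
		// Both ssh invocations above returned an effectively empty "\n\n",
		// so every expectation fails, matching the messages in the report.
		envOutput := "\n\n" // from: systemctl show docker --property=Environment
		for _, err := range assertContainsAll(envOutput, []string{"FOO=BAR", "BAZ=BAT"}) {
			fmt.Println(err)
		}
		execOutput := "\n\n" // from: systemctl show docker --property=ExecStart
		for _, err := range assertContainsAll(execOutput, []string{"--debug"}) {
			fmt.Println(err)
		}
	}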
panic.go:629: *** TestDockerFlags FAILED at 2024-09-26 18:34:11.692558 -0700 PDT m=+4814.424613589
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-309000 -n docker-flags-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-309000 -n docker-flags-309000: exit status 7 (85.612403ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0926 18:34:11.776052    6630 status.go:386] failed to get driver ip: getting IP: IP address is not set
	E0926 18:34:11.776074    6630 status.go:119] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-309000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "docker-flags-309000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-309000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-309000: (5.247140762s)
--- FAIL: TestDockerFlags (252.41s)
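
The post-mortem above follows the harness's standard cleanup pattern: query host state with `minikube status`, treat a non-zero exit as possibly benign, skip log retrieval when the host is not running, and finally delete the profile. A simplified sketch of that control flow (names below are hypothetical, not the helpers_test.go API):

	package main

	import "fmt"

	// postMortem mirrors the cleanup flow logged above: a non-zero exit
	// from `minikube status` is reported but tolerated, log retrieval is
	// skipped unless the host is running, and the profile is deleted.
	func postMortem(profile, hostState string, statusExit int) {
		if statusExit != 0 {
			fmt.Printf("status error: exit status %d (may be ok)\n", statusExit)
		}
		if hostState != "Running" {
			fmt.Printf("%q host is not running, skipping log retrieval (state=%q)\n", profile, hostState)
		} else {
			fmt.Println("collecting logs before cleanup...")
		}
		fmt.Printf("Cleaning up %q profile ...\n", profile)
	}

	func main() {
		// Values observed in this run.
		postMortem("docker-flags-309000", "Error", 7)
	}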

                                                
                                    
TestForceSystemdFlag (252.3s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-396000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-396000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.694347568s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-396000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-flag-396000" primary control-plane node in "force-systemd-flag-396000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-flag-396000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 18:29:01.494149    6185 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:29:01.494414    6185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:29:01.494419    6185 out.go:358] Setting ErrFile to fd 2...
	I0926 18:29:01.494423    6185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:29:01.494601    6185 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 18:29:01.496137    6185 out.go:352] Setting JSON to false
	I0926 18:29:01.518748    6185 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5311,"bootTime":1727395230,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0926 18:29:01.518894    6185 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:29:01.539330    6185 out.go:177] * [force-systemd-flag-396000] minikube v1.34.0 on Darwin 14.6.1
	I0926 18:29:01.582134    6185 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:29:01.582215    6185 notify.go:220] Checking for updates...
	I0926 18:29:01.623798    6185 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 18:29:01.666927    6185 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0926 18:29:01.741960    6185 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:29:01.799974    6185 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 18:29:01.842876    6185 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:29:01.864368    6185 config.go:182] Loaded profile config "force-systemd-env-761000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:29:01.864460    6185 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:29:01.892887    6185 out.go:177] * Using the hyperkit driver based on user configuration
	I0926 18:29:01.933903    6185 start.go:297] selected driver: hyperkit
	I0926 18:29:01.933919    6185 start.go:901] validating driver "hyperkit" against <nil>
	I0926 18:29:01.933929    6185 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:29:01.936893    6185 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:29:01.937017    6185 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19711-1128/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0926 18:29:01.945405    6185 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0926 18:29:01.949337    6185 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:29:01.949357    6185 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0926 18:29:01.949384    6185 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 18:29:01.949624    6185 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 18:29:01.949649    6185 cni.go:84] Creating CNI manager for ""
	I0926 18:29:01.949690    6185 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:29:01.949698    6185 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 18:29:01.949756    6185 start.go:340] cluster config:
	{Name:force-systemd-flag-396000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:29:01.949845    6185 iso.go:125] acquiring lock: {Name:mka8a9c5a237c1e4ae233281d2ff7965d13a843d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:29:01.970908    6185 out.go:177] * Starting "force-systemd-flag-396000" primary control-plane node in "force-systemd-flag-396000" cluster
	I0926 18:29:01.991767    6185 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:29:01.991800    6185 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0926 18:29:01.991814    6185 cache.go:56] Caching tarball of preloaded images
	I0926 18:29:01.991912    6185 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 18:29:01.991926    6185 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:29:01.991998    6185 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/force-systemd-flag-396000/config.json ...
	I0926 18:29:01.992018    6185 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/force-systemd-flag-396000/config.json: {Name:mk4231747f3fefdfe44c05965555961e6e9aee01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:29:01.992352    6185 start.go:360] acquireMachinesLock for force-systemd-flag-396000: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:29:58.821990    6185 start.go:364] duration metric: took 56.829099123s to acquireMachinesLock for "force-systemd-flag-396000"
	I0926 18:29:58.822053    6185 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:29:58.822122    6185 start.go:125] createHost starting for "" (driver="hyperkit")
	I0926 18:29:58.864456    6185 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0926 18:29:58.864644    6185 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:29:58.864678    6185 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:29:58.873450    6185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53845
	I0926 18:29:58.874012    6185 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:29:58.874546    6185 main.go:141] libmachine: Using API Version  1
	I0926 18:29:58.874556    6185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:29:58.874964    6185 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:29:58.875124    6185 main.go:141] libmachine: (force-systemd-flag-396000) Calling .GetMachineName
	I0926 18:29:58.875223    6185 main.go:141] libmachine: (force-systemd-flag-396000) Calling .DriverName
	I0926 18:29:58.875335    6185 start.go:159] libmachine.API.Create for "force-systemd-flag-396000" (driver="hyperkit")
	I0926 18:29:58.875360    6185 client.go:168] LocalClient.Create starting
	I0926 18:29:58.875398    6185 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem
	I0926 18:29:58.875454    6185 main.go:141] libmachine: Decoding PEM data...
	I0926 18:29:58.875469    6185 main.go:141] libmachine: Parsing certificate...
	I0926 18:29:58.875532    6185 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem
	I0926 18:29:58.875569    6185 main.go:141] libmachine: Decoding PEM data...
	I0926 18:29:58.875579    6185 main.go:141] libmachine: Parsing certificate...
	I0926 18:29:58.875592    6185 main.go:141] libmachine: Running pre-create checks...
	I0926 18:29:58.875600    6185 main.go:141] libmachine: (force-systemd-flag-396000) Calling .PreCreateCheck
	I0926 18:29:58.875679    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:58.875843    6185 main.go:141] libmachine: (force-systemd-flag-396000) Calling .GetConfigRaw
	I0926 18:29:58.885553    6185 main.go:141] libmachine: Creating machine...
	I0926 18:29:58.885563    6185 main.go:141] libmachine: (force-systemd-flag-396000) Calling .Create
	I0926 18:29:58.885663    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:58.885805    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | I0926 18:29:58.885656    6209 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 18:29:58.885840    6185 main.go:141] libmachine: (force-systemd-flag-396000) Downloading /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1128/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0926 18:29:59.307179    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | I0926 18:29:59.307099    6209 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/id_rsa...
	I0926 18:29:59.414089    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | I0926 18:29:59.413990    6209 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/force-systemd-flag-396000.rawdisk...
	I0926 18:29:59.414104    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Writing magic tar header
	I0926 18:29:59.414115    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Writing SSH key tar header
	I0926 18:29:59.414441    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | I0926 18:29:59.414405    6209 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000 ...
	I0926 18:29:59.781402    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:59.781420    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/hyperkit.pid
	I0926 18:29:59.781473    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Using UUID 9a61d1ec-a5f9-40ca-a848-f79d5b59e4a0
	I0926 18:29:59.805622    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Generated MAC 1e:11:eb:17:9c:53
	I0926 18:29:59.805639    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-396000
	I0926 18:29:59.805667    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:29:59 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9a61d1ec-a5f9-40ca-a848-f79d5b59e4a0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:29:59.805697    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:29:59 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9a61d1ec-a5f9-40ca-a848-f79d5b59e4a0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:29:59.805757    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:29:59 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9a61d1ec-a5f9-40ca-a848-f79d5b59e4a0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/force-systemd-flag-396000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-396000"}
	I0926 18:29:59.805791    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:29:59 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9a61d1ec-a5f9-40ca-a848-f79d5b59e4a0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/force-systemd-flag-396000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/bzimage,/Users/jenkins/minikube-integr
ation/19711-1128/.minikube/machines/force-systemd-flag-396000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-396000"
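
The Start/check dumps and the Arguments/CmdLine records above show everything the driver hands to hyperkit before forking it; the next two lines ("Redirecting stdout/stderr to logger", "Pid is 6223") record the launch itself. As a rough illustration only (not the actual docker-machine-driver-hyperkit source), a Go supervisor could exec that command line and forward hyperkit's stderr into its own logger; the disk, ISO, serial and kexec stanzas from the argv above are omitted here for brevity.

package main

import (
	"bufio"
	"log"
	"os/exec"
)

func main() {
	// Argument vector taken from the "DEBUG: hyperkit: Arguments" line above
	// (truncated: the virtio-blk, ahci-cd, com1 and kexec stanzas are left out).
	cmd := exec.Command("/usr/local/bin/hyperkit",
		"-A", "-u",
		"-c", "2", "-m", "2048M",
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net",
		"-U", "9a61d1ec-a5f9-40ca-a848-f79d5b59e4a0",
	)
	stderr, err := cmd.StderrPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	log.Printf("hyperkit: Pid is %d", cmd.Process.Pid) // cf. "DEBUG: hyperkit: Pid is 6223"
	sc := bufio.NewScanner(stderr)
	for sc.Scan() {
		// Messages such as "vmx_set_ctlreg ..." and "rdmsr to register ..." arrive
		// here and get re-emitted like the "INFO : hyperkit: stderr:" records below.
		log.Printf("INFO : hyperkit: stderr: %s", sc.Text())
	}
	if err := cmd.Wait(); err != nil {
		log.Printf("hyperkit exited: %v", err)
	}
}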
	I0926 18:29:59.805800    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:29:59 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 18:29:59.808766    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:29:59 DEBUG: hyperkit: Pid is 6223
	I0926 18:29:59.809226    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 0
	I0926 18:29:59.809248    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:59.809283    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:29:59.810210    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:29:59.810268    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:59.810284    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:59.810303    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:59.810319    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:59.810332    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:59.810343    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:59.810358    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:59.810377    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:59.810394    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:59.810422    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:59.810440    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:59.810454    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:59.810466    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:59.810477    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:59.810489    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:59.810502    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:59.810514    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:59.810527    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:59.810540    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:59.816288    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:29:59 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 18:29:59.824333    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:29:59 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 18:29:59.825136    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:29:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:29:59.825148    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:29:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:29:59.825183    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:29:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:29:59.825219    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:29:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:30:00.199898    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:30:00 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 18:30:00.199916    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:30:00 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 18:30:00.315059    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:30:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:30:00.315085    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:30:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:30:00.315116    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:30:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:30:00.315129    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:30:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:30:00.315975    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:30:00 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 18:30:00.315988    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:30:00 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
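
Each "Attempt N" block above is one pass over /var/db/dhcpd_leases looking for the new VM's MAC (1e:11:eb:17:9c:53); the 18 "dhcp entry" lines are leases left by earlier minikube VMs, none of which match yet. A minimal sketch of such a scan, assuming the macOS vmnet lease format (brace-delimited stanzas of name=/ip_address=/hw_address=1,<mac>/lease= lines) rather than the driver's own parser:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPForMAC scans a dhcpd_leases-style file and returns the ip_address of the
// first lease whose hw_address matches wantMAC (stored as "1,<mac>" in the file).
func findIPForMAC(path, wantMAC string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			// Remember the address of the stanza we are currently inside.
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			mac := strings.TrimPrefix(line, "hw_address=")
			mac = strings.TrimPrefix(mac, "1,") // drop the hardware-type prefix
			if mac == wantMAC {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("%s not found", wantMAC)
}

func main() {
	ip, err := findIPForMAC("/var/db/dhcpd_leases", "1e:11:eb:17:9c:53")
	if err != nil {
		fmt.Println(err) // expected while the guest has not obtained a lease yet
		return
	}
	fmt.Println("found", ip)
}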
	I0926 18:30:01.811046    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 1
	I0926 18:30:01.811063    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:01.811119    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:01.811932    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:01.811989    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:01.812010    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:01.812029    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:01.812041    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:01.812048    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:01.812057    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:01.812072    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:01.812084    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:01.812092    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:01.812100    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:01.812107    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:01.812112    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:01.812118    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:01.812124    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:01.812139    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:01.812147    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:01.812154    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:01.812161    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:01.812166    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
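
The attempts above fire roughly every two seconds (18:29:59, 18:30:01, 18:30:03, ...), and each one first re-reads the hyperkit pid from the machine's JSON state ("hyperkit pid from json: 6223") to confirm the VM process is still alive before re-scanning the leases. Sketched as a loop in the same package as the findIPForMAC helper above (illustrative names, needs "fmt", "syscall" and "time"; not the driver's actual API):

func waitForIP(pid int, leasesPath, mac string, maxAttempts int) (string, error) {
	for attempt := 0; attempt < maxAttempts; attempt++ {
		fmt.Printf("Attempt %d\n", attempt)
		// Signal 0 only probes for existence: if hyperkit died, stop polling.
		if err := syscall.Kill(pid, 0); err != nil {
			return "", fmt.Errorf("hyperkit pid %d is gone: %v", pid, err)
		}
		if ip, err := findIPForMAC(leasesPath, mac); err == nil {
			return ip, nil // the guest finally shows up in /var/db/dhcpd_leases
		}
		time.Sleep(2 * time.Second) // matches the ~2s spacing of the attempts above
	}
	return "", fmt.Errorf("no DHCP lease for %s after %d attempts", mac, maxAttempts)
}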
	I0926 18:30:03.812681    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 2
	I0926 18:30:03.812698    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:03.812774    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:03.813587    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:03.813622    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:03.813636    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:03.813672    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:03.813689    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:03.813699    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:03.813706    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:03.813713    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:03.813719    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:03.813779    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:03.813801    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:03.813813    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:03.813819    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:03.813831    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:03.813839    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:03.813845    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:03.813851    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:03.813857    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:03.813865    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:03.813873    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:05.815077    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 3
	I0926 18:30:05.815096    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:05.815209    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:05.816002    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:05.816061    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:05.816075    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:05.816088    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:05.816098    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:05.816115    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:05.816124    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:05.816131    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:05.816138    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:05.816144    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:05.816152    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:05.816158    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:05.816164    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:05.816170    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:05.816177    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:05.816199    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:05.816211    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:05.816230    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:05.816242    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:05.816251    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:05.829935    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:30:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0926 18:30:05.830099    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:30:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0926 18:30:05.830106    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:30:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0926 18:30:05.850067    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:30:05 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0926 18:30:07.817387    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 4
	I0926 18:30:07.817401    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:07.817514    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:07.818298    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:07.818354    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:07.818363    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:07.818371    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:07.818378    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:07.818384    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:07.818392    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:07.818399    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:07.818405    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:07.818411    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:07.818417    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:07.818438    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:07.818451    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:07.818463    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:07.818472    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:07.818479    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:07.818487    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:07.818493    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:07.818509    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:07.818521    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:09.820591    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 5
	I0926 18:30:09.820607    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:09.820621    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:09.821730    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:09.821771    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:09.821781    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:09.821793    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:09.821799    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:09.821806    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:09.821812    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:09.821818    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:09.821825    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:09.821831    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:09.821851    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:09.821862    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:09.821872    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:09.821885    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:09.821892    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:09.821900    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:09.821921    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:09.821934    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:09.821942    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:09.821949    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:11.822506    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 6
	I0926 18:30:11.822520    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:11.822606    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:11.823385    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:11.823437    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:11.823446    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:11.823460    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:11.823471    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:11.823479    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:11.823485    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:11.823493    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:11.823499    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:11.823507    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:11.823515    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:11.823521    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:11.823527    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:11.823533    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:11.823541    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:11.823547    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:11.823555    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:11.823570    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:11.823578    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:11.823586    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:13.825676    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 7
	I0926 18:30:13.825688    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:13.825744    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:13.826735    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:13.826784    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:13.826791    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:13.826810    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:13.826820    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:13.826845    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:13.826858    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:13.826866    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:13.826872    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:13.826881    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:13.826889    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:13.826896    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:13.826905    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:13.826917    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:13.826925    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:13.826938    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:13.826946    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:13.826953    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:13.826971    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:13.826982    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:15.828782    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 8
	I0926 18:30:15.828798    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:15.828899    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:15.829676    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:15.829723    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:15.829738    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:15.829758    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:15.829766    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:15.829777    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:15.829788    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:15.829794    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:15.829800    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:15.829811    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:15.829822    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:15.829837    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:15.829851    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:15.829859    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:15.829865    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:15.829871    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:15.829879    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:15.829885    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:15.829892    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:15.829908    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:17.831749    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 9
	I0926 18:30:17.831765    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:17.831801    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:17.832603    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:17.832666    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:17.832676    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:17.832686    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:17.832695    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:17.832702    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:17.832707    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:17.832718    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:17.832727    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:17.832733    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:17.832740    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:17.832746    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:17.832751    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:17.832765    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:17.832778    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:17.832785    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:17.832793    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:17.832806    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:17.832815    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:17.832826    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:19.834827    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 10
	I0926 18:30:19.834839    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:19.834929    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:19.835718    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:19.835756    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:19.835765    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:19.835780    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:19.835800    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:19.835813    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:19.835829    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:19.835837    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:19.835843    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:19.835850    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:19.835871    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:19.835882    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:19.835892    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:19.835899    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:19.835906    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:19.835915    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:19.835928    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:19.835934    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:19.835942    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:19.835950    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:21.837539    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 11
	I0926 18:30:21.837553    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:21.837623    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:21.838485    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:21.838526    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:21.838538    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:21.838546    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:21.838555    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:21.838566    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:21.838576    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:21.838586    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:21.838594    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:21.838602    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:21.838608    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:21.838614    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:21.838621    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:21.838631    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:21.838636    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:21.838654    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:21.838666    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:21.838674    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:21.838684    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:21.838693    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:23.840307    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 12
	I0926 18:30:23.840329    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:23.840378    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:23.841155    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:23.841200    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:23.841210    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:23.841219    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:23.841226    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:23.841234    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:23.841239    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:23.841245    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:23.841252    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:23.841277    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:23.841289    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:23.841297    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:23.841307    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:23.841314    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:23.841320    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:23.841328    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:23.841339    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:23.841351    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:23.841360    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:23.841369    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
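	[editor's note] Each "dhcp entry" line above is the driver's own rendering of a parsed lease record (Name/IPAddress/HWAddress/ID/Lease); the on-disk syntax of /var/db/dhcpd_leases itself differs. As a minimal sketch of what consuming this logged form looks like — the dhcpEntry struct, entryRe regex, and parseEntry helper below are illustrative assumptions, not minikube's actual code:

	// Sketch: parse one lease entry in the shape the log prints, e.g.
	// {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	package main

	import (
		"fmt"
		"regexp"
	)

	// dhcpEntry mirrors the five fields visible in the logged records.
	type dhcpEntry struct {
		Name      string
		IPAddress string
		HWAddress string
		ID        string
		Lease     string
	}

	var entryRe = regexp.MustCompile(`\{Name:(\S+) IPAddress:(\S+) HWAddress:(\S+) ID:(\S+) Lease:(\S+)\}`)

	// parseEntry extracts the fields from one logged entry line.
	func parseEntry(line string) (dhcpEntry, bool) {
		m := entryRe.FindStringSubmatch(line)
		if m == nil {
			return dhcpEntry{}, false
		}
		return dhcpEntry{Name: m[1], IPAddress: m[2], HWAddress: m[3], ID: m[4], Lease: m[5]}, true
	}

	func main() {
		line := "{Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}"
		if e, ok := parseEntry(line); ok {
			// Note: octets are logged without zero padding ("ee:f:..."), so a
			// byte-for-byte compare against a padded MAC such as the search
			// target 1e:11:eb:17:9c:53 must normalize both sides first.
			fmt.Printf("%s -> %s\n", e.HWAddress, e.IPAddress)
		}
	}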
	I0926 18:30:25.842952    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 13
	I0926 18:30:25.842963    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:25.843021    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:25.844059    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:25.844094    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:25.844107    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:25.844116    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:25.844128    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:25.844137    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:25.844154    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:25.844161    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:25.844168    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:25.844174    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:25.844188    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:25.844200    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:25.844208    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:25.844215    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:25.844222    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:25.844230    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:25.844247    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:25.844259    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:25.844267    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:25.844275    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:27.845036    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 14
	I0926 18:30:27.845053    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:27.845110    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:27.845906    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:27.845942    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:27.845949    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:27.845958    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:27.845966    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:27.845993    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:27.846005    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:27.846013    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:27.846018    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:27.846026    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:27.846038    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:27.846055    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:27.846069    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:27.846077    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:27.846085    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:27.846102    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:27.846115    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:27.846123    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:27.846129    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:27.846137    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:29.847846    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 15
	I0926 18:30:29.847862    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:29.847872    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:29.848657    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:29.848711    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:29.848722    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:29.848733    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:29.848743    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:29.848750    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:29.848759    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:29.848768    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:29.848776    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:29.848783    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:29.848790    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:29.848797    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:29.848808    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:29.848816    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:29.848822    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:29.848829    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:29.848845    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:29.848856    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:29.848864    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:29.848872    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:31.849409    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 16
	I0926 18:30:31.849424    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:31.849520    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:31.850406    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:31.850449    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:31.850461    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:31.850469    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:31.850481    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:31.850491    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:31.850499    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:31.850506    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:31.850511    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:31.850518    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:31.850525    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:31.850549    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:31.850560    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:31.850567    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:31.850576    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:31.850585    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:31.850593    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:31.850601    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:31.850607    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:31.850613    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:33.852680    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 17
	I0926 18:30:33.852694    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:33.852740    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:33.853563    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:33.853625    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:33.853635    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:33.853642    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:33.853647    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:33.853655    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:33.853661    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:33.853684    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:33.853699    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:33.853706    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:33.853712    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:33.853718    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:33.853724    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:33.853731    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:33.853738    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:33.853753    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:33.853761    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:33.853768    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:33.853775    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:33.853783    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:35.854086    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 18
	I0926 18:30:35.854100    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:35.854169    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:35.855159    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:35.855202    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:35.855209    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:35.855219    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:35.855230    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:35.855253    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:35.855265    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:35.855273    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:35.855281    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:35.855294    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:35.855303    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:35.855312    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:35.855320    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:35.855327    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:35.855333    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:35.855339    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:35.855347    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:35.855354    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:35.855361    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:35.855369    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:37.855807    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 19
	I0926 18:30:37.855825    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:37.855877    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:37.856701    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:37.856737    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:37.856758    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:37.856774    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:37.856783    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:37.856790    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:37.856796    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:37.856819    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:37.856829    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:37.856837    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:37.856845    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:37.856852    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:37.856859    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:37.856866    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:37.856874    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:37.856879    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:37.856887    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:37.856902    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:37.856910    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:37.856923    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:39.858807    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 20
	I0926 18:30:39.858822    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:39.858854    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:39.859708    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:39.859760    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:39.859772    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:39.859783    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:39.859794    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:39.859808    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:39.859815    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:39.859821    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:39.859839    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:39.859850    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:39.859859    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:39.859865    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:39.859876    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:39.859889    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:39.859899    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:39.859906    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:39.859922    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:39.859931    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:39.859942    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:39.859950    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:41.860378    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 21
	I0926 18:30:41.860394    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:41.860459    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:41.861283    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:41.861334    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:41.861346    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:41.861356    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:41.861366    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:41.861376    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:41.861382    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:41.861390    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:41.861396    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:41.861409    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:41.861421    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:41.861429    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:41.861440    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:41.861458    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:41.861466    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:41.861475    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:41.861487    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:41.861494    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:41.861500    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:41.861516    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:43.863114    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 22
	I0926 18:30:43.863128    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:43.863182    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:43.864069    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:43.864117    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:43.864130    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:43.864142    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:43.864152    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:43.864162    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:43.864169    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:43.864180    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:43.864189    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:43.864196    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:43.864202    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:43.864210    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:43.864218    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:43.864225    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:43.864236    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:43.864244    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:43.864250    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:43.864257    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:43.864262    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:43.864270    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:45.865766    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 23
	I0926 18:30:45.865780    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:45.865862    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:45.866726    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:45.866771    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:45.866780    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:45.866787    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:45.866793    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:45.866802    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:45.866814    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:45.866832    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:45.866844    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:45.866861    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:45.866874    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:45.866883    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:45.866890    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:45.866897    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:45.866905    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:45.866919    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:45.866930    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:45.866946    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:45.866958    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:45.866987    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:47.867878    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 24
	I0926 18:30:47.867892    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:47.867975    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:47.868779    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:47.868827    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:47.868843    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:47.868857    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:47.868865    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:47.868873    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:47.868880    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:47.868886    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:47.868892    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:47.868907    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:47.868918    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:47.868935    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:47.868943    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:47.868952    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:47.868960    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:47.868970    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:47.868977    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:47.868983    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:47.868988    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:47.869004    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:49.869237    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 25
	I0926 18:30:49.869250    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:49.869317    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:49.870116    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:49.870164    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:49.870174    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:49.870183    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:49.870192    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:49.870203    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:49.870214    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:49.870242    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:49.870252    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:49.870267    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:49.870280    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:49.870290    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:49.870298    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:49.870305    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:49.870312    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:49.870319    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:49.870325    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:49.870334    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:49.870340    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:49.870348    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:51.872355    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 26
	I0926 18:30:51.872370    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:51.872459    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:51.873568    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:51.873624    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:51.873638    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:51.873653    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:51.873664    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:51.873683    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:51.873696    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:51.873704    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:51.873712    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:51.873723    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:51.873734    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:51.873743    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:51.873756    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:51.873765    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:51.873773    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:51.873786    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:51.873792    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:51.873800    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:51.873808    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:51.873815    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:53.874004    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 27
	I0926 18:30:53.874016    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:53.874131    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:53.875172    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:53.875220    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:53.875230    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:53.875239    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:53.875245    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:53.875251    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:53.875258    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:53.875264    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:53.875271    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:53.875277    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:53.875283    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:53.875289    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:53.875295    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:53.875309    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:53.875324    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:53.875342    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:53.875354    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:53.875362    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:53.875372    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:53.875379    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:55.876081    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 28
	I0926 18:30:55.876097    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:55.876157    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:55.877203    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:55.877212    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:55.877223    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:55.877234    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:55.877242    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:55.877248    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:55.877256    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:55.877277    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:55.877290    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:55.877299    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:55.877307    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:55.877314    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:55.877322    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:55.877337    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:55.877349    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:55.877358    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:55.877366    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:55.877373    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:55.877381    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:55.877397    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:57.879418    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 29
	I0926 18:30:57.879429    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:30:57.879498    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:30:57.880360    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for 1e:11:eb:17:9c:53 in /var/db/dhcpd_leases ...
	I0926 18:30:57.880413    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:30:57.880429    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:30:57.880452    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:30:57.880462    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:30:57.880474    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:30:57.880485    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:30:57.880491    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:30:57.880497    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:30:57.880503    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:30:57.880511    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:30:57.880529    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:30:57.880537    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:30:57.880544    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:30:57.880552    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:30:57.880559    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:30:57.880566    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:30:57.880576    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:30:57.880585    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:30:57.880601    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:30:59.881779    6185 client.go:171] duration metric: took 1m1.005853661s to LocalClient.Create
	I0926 18:31:01.882572    6185 start.go:128] duration metric: took 1m3.059847091s to createHost
	I0926 18:31:01.882594    6185 start.go:83] releasing machines lock for "force-systemd-flag-396000", held for 1m3.060006536s
	W0926 18:31:01.882610    6185 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:11:eb:17:9c:53
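	[Editor's sketch] The failure above is the end of a polling loop: the driver re-reads /var/db/dhcpd_leases roughly every two seconds (Attempts 0-29 in the log) looking for the VM's generated MAC address. A minimal, self-contained Go sketch of that approach follows; findIP is a hypothetical helper, and the ip_address=/hw_address=1,<mac> field names are an assumption about the bootpd lease-file format, not minikube's actual implementation.
	
	// Sketch only: poll the macOS bootpd lease file for a MAC address the way
	// the driver's "Searching for ... in /var/db/dhcpd_leases" attempts do.
	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"strings"
		"time"
	)
	
	// findIP scans the lease file and returns the IP of the entry whose
	// hw_address matches mac. Field names are assumed, as noted above.
	func findIP(leaseFile, mac string) (string, bool) {
		f, err := os.Open(leaseFile)
		if err != nil {
			return "", false
		}
		defer f.Close()
		var lastIP string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ip_address=") {
				lastIP = strings.TrimPrefix(line, "ip_address=")
			}
			if line == "hw_address=1,"+mac {
				return lastIP, true // an entry's IP line precedes its hw_address line
			}
		}
		return "", false
	}
	
	func main() {
		const mac = "1e:11:eb:17:9c:53" // the MAC the attempts above wait for
		for attempt := 0; attempt < 30; attempt++ { // log shows ~30 attempts, ~2s apart
			if ip, ok := findIP("/var/db/dhcpd_leases", mac); ok {
				fmt.Println("found IP:", ip)
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("IP address never found in dhcp leases file for", mac)
	}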
	I0926 18:31:01.882992    6185 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:31:01.883022    6185 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:31:01.891667    6185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53863
	I0926 18:31:01.892028    6185 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:31:01.892380    6185 main.go:141] libmachine: Using API Version  1
	I0926 18:31:01.892399    6185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:31:01.892625    6185 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:31:01.892998    6185 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:31:01.893023    6185 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:31:01.901458    6185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53865
	I0926 18:31:01.901793    6185 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:31:01.902152    6185 main.go:141] libmachine: Using API Version  1
	I0926 18:31:01.902166    6185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:31:01.902370    6185 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:31:01.902506    6185 main.go:141] libmachine: (force-systemd-flag-396000) Calling .GetState
	I0926 18:31:01.902598    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:01.902689    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:31:01.903642    6185 main.go:141] libmachine: (force-systemd-flag-396000) Calling .DriverName
	I0926 18:31:01.949031    6185 out.go:177] * Deleting "force-systemd-flag-396000" in hyperkit ...
	I0926 18:31:02.006804    6185 main.go:141] libmachine: (force-systemd-flag-396000) Calling .Remove
	I0926 18:31:02.006967    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:02.006977    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:02.007055    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:31:02.008016    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:02.008065    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | waiting for graceful shutdown
	I0926 18:31:03.009279    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:03.009332    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:31:03.010250    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | waiting for graceful shutdown
	I0926 18:31:04.012354    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:04.012515    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:31:04.014189    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | waiting for graceful shutdown
	I0926 18:31:05.014816    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:05.014917    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:31:05.015555    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | waiting for graceful shutdown
	I0926 18:31:06.016697    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:06.016805    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:31:06.017554    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | waiting for graceful shutdown
	I0926 18:31:07.018166    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:07.018185    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6223
	I0926 18:31:07.019271    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | sending sigkill
	I0926 18:31:07.019281    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:31:07.031371    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:31:07 WARN : hyperkit: failed to read stdout: EOF
	I0926 18:31:07.031390    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:31:07 WARN : hyperkit: failed to read stderr: EOF
	W0926 18:31:07.054722    6185 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:11:eb:17:9c:53
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:11:eb:17:9c:53
	I0926 18:31:07.054743    6185 start.go:729] Will try again in 5 seconds ...
	I0926 18:31:12.056836    6185 start.go:360] acquireMachinesLock for force-systemd-flag-396000: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:32:04.885640    6185 start.go:364] duration metric: took 52.828289486s to acquireMachinesLock for "force-systemd-flag-396000"
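	[Editor's sketch] The three lines above show the recovery path: after StartHost fails, minikube sleeps 5 seconds, then re-acquires the per-machine creation lock (configured with Delay:500ms Timeout:13m0s) before provisioning a fresh VM. A minimal sketch of that retry-once flow, with startHost and acquireMachinesLock as hypothetical stand-ins for the real code in start.go and libmachine:
	
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// Stand-ins only: the stub always fails, mirroring this test run, where
	// both attempts ended in "IP address never found in dhcp leases file".
	func startHost(name string) error {
		return errors.New("IP address never found in dhcp leases file")
	}
	
	func acquireMachinesLock(name string, delay, timeout time.Duration) (func(), error) {
		return func() {}, nil // pretend the lock was acquired
	}
	
	// startWithRetry mirrors the logged flow: one failed StartHost, a 5s
	// pause, re-acquire the machines lock, then one final attempt.
	func startWithRetry(name string) error {
		err := startHost(name)
		if err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second)
		unlock, lockErr := acquireMachinesLock(name, 500*time.Millisecond, 13*time.Minute)
		if lockErr != nil {
			return lockErr
		}
		defer unlock()
		return startHost(name)
	}
	
	func main() {
		if err := startWithRetry("force-systemd-flag-396000"); err != nil {
			fmt.Println("start failed:", err)
		}
	}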
	I0926 18:32:04.885680    6185 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:32:04.885744    6185 start.go:125] createHost starting for "" (driver="hyperkit")
	I0926 18:32:04.906901    6185 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0926 18:32:04.906987    6185 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:32:04.907002    6185 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:32:04.915808    6185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53876
	I0926 18:32:04.916264    6185 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:32:04.916677    6185 main.go:141] libmachine: Using API Version  1
	I0926 18:32:04.916697    6185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:32:04.916895    6185 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:32:04.917050    6185 main.go:141] libmachine: (force-systemd-flag-396000) Calling .GetMachineName
	I0926 18:32:04.917142    6185 main.go:141] libmachine: (force-systemd-flag-396000) Calling .DriverName
	I0926 18:32:04.917244    6185 start.go:159] libmachine.API.Create for "force-systemd-flag-396000" (driver="hyperkit")
	I0926 18:32:04.917256    6185 client.go:168] LocalClient.Create starting
	I0926 18:32:04.917279    6185 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem
	I0926 18:32:04.917329    6185 main.go:141] libmachine: Decoding PEM data...
	I0926 18:32:04.917340    6185 main.go:141] libmachine: Parsing certificate...
	I0926 18:32:04.917387    6185 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem
	I0926 18:32:04.917460    6185 main.go:141] libmachine: Decoding PEM data...
	I0926 18:32:04.917470    6185 main.go:141] libmachine: Parsing certificate...
	I0926 18:32:04.917483    6185 main.go:141] libmachine: Running pre-create checks...
	I0926 18:32:04.917488    6185 main.go:141] libmachine: (force-systemd-flag-396000) Calling .PreCreateCheck
	I0926 18:32:04.917561    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:04.917587    6185 main.go:141] libmachine: (force-systemd-flag-396000) Calling .GetConfigRaw
	I0926 18:32:04.951005    6185 main.go:141] libmachine: Creating machine...
	I0926 18:32:04.951013    6185 main.go:141] libmachine: (force-systemd-flag-396000) Calling .Create
	I0926 18:32:04.951151    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:04.951346    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | I0926 18:32:04.951155    6576 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 18:32:04.951376    6185 main.go:141] libmachine: (force-systemd-flag-396000) Downloading /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1128/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0926 18:32:05.157477    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | I0926 18:32:05.157381    6576 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/id_rsa...
	I0926 18:32:05.475076    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | I0926 18:32:05.474973    6576 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/force-systemd-flag-396000.rawdisk...
	I0926 18:32:05.475092    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Writing magic tar header
	I0926 18:32:05.475102    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Writing SSH key tar header
	I0926 18:32:05.475666    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | I0926 18:32:05.475615    6576 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000 ...
	I0926 18:32:05.844637    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:05.844653    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/hyperkit.pid
	I0926 18:32:05.844666    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Using UUID 747a2896-a191-4c9c-95ab-9da555ffe005
	I0926 18:32:05.869999    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Generated MAC ca:3d:d3:5e:e6:32
	I0926 18:32:05.870018    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-396000
	I0926 18:32:05.870059    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:05 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"747a2896-a191-4c9c-95ab-9da555ffe005", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0005921b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:32:05.870090    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:05 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"747a2896-a191-4c9c-95ab-9da555ffe005", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0005921b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:32:05.870148    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:05 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "747a2896-a191-4c9c-95ab-9da555ffe005", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/force-systemd-flag-396000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-396000"}
	I0926 18:32:05.870189    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:05 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 747a2896-a191-4c9c-95ab-9da555ffe005 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/force-systemd-flag-396000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-396000"
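	[Editor's sketch] The DEBUG lines above show the full hyperkit invocation the driver builds: pid file, 2 CPUs, 2048M RAM, a virtio-net NIC (whose MAC is what the lease polling waits for), a virtio-blk raw disk, the boot2docker ISO on ahci-cd, a com1 autopty console, and a kexec boot of bzimage/initrd. A minimal sketch of launching hyperkit with an equivalent argument vector via os/exec; paths are placeholders, and the real driver goes through the moby/hyperkit Go package rather than exec'ing the binary directly.
	
	package main
	
	import (
		"log"
		"os/exec"
	)
	
	func main() {
		state := "/path/to/machines/force-systemd-flag-396000" // placeholder state dir
		args := []string{
			"-A", "-u",
			"-F", state + "/hyperkit.pid", // pid file the driver later reads back
			"-c", "2", // CPUs
			"-m", "2048M", // memory
			"-s", "0:0,hostbridge",
			"-s", "31,lpc",
			"-s", "1:0,virtio-net", // NIC whose MAC must appear in dhcpd_leases
			"-U", "747a2896-a191-4c9c-95ab-9da555ffe005", // VM UUID from the log
			"-s", "2:0,virtio-blk," + state + "/force-systemd-flag-396000.rawdisk",
			"-s", "3,ahci-cd," + state + "/boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-l", "com1,autopty=" + state + "/tty,log=" + state + "/console-ring",
			"-f", "kexec," + state + "/bzimage," + state + "/initrd,loglevel=3 console=ttyS0",
		}
		cmd := exec.Command("/usr/local/bin/hyperkit", args...)
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
		log.Printf("hyperkit pid: %d", cmd.Process.Pid)
	}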
	I0926 18:32:05.870199    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:05 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 18:32:05.873081    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:05 DEBUG: hyperkit: Pid is 6577
	I0926 18:32:05.873477    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 0
	I0926 18:32:05.873493    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:05.873567    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:05.874509    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:05.874583    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:05.874607    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:05.874634    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:05.874670    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:05.874700    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:05.874724    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:05.874744    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:05.874778    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:05.874796    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:05.874820    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:05.874829    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:05.874846    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:05.874852    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:05.874862    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:05.874873    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:05.874889    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:05.874918    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:05.874949    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:05.874960    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:05.880907    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:05 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 18:32:05.888928    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:05 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-flag-396000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 18:32:05.889769    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:32:05.889784    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:32:05.889800    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:32:05.889811    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:32:06.266327    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:06 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 18:32:06.266342    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:06 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 18:32:06.381849    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:32:06.381884    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:32:06.381917    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:32:06.381935    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:32:06.382772    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:06 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 18:32:06.382785    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:06 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 18:32:07.875459    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 1
	I0926 18:32:07.875477    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:07.875586    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:07.876409    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:07.876478    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:07.876490    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:07.876499    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:07.876504    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:07.876510    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:07.876518    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:07.876524    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:07.876532    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:07.876542    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:07.876554    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:07.876562    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:07.876570    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:07.876575    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:07.876595    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:07.876605    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:07.876612    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:07.876627    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:07.876633    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:07.876643    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:09.878158    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 2
	I0926 18:32:09.878176    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:09.878249    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:09.879068    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:09.879113    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:09.879123    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:09.879132    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:09.879140    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:09.879147    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:09.879154    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:09.879162    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:09.879169    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:09.879175    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:09.879187    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:09.879202    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:09.879211    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:09.879217    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:09.879231    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:09.879237    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:09.879251    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:09.879260    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:09.879267    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:09.879275    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:11.810567    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:11 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0926 18:32:11.810721    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:11 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0926 18:32:11.810730    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:11 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0926 18:32:11.830352    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | 2024/09/26 18:32:11 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0926 18:32:11.881447    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 3
	I0926 18:32:11.881474    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:11.881656    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:11.883450    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:11.883534    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:11.883548    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:11.883560    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:11.883569    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:11.883601    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:11.883621    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:11.883631    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:11.883639    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:11.883668    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:11.883685    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:11.883707    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:11.883724    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:11.883735    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:11.883746    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:11.883756    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:11.883764    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:11.883784    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:11.883800    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:11.883811    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
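Each "Attempt" block above is one pass over macOS's DHCP lease database: the driver is waiting for the VM's MAC address (ca:3d:d3:5e:e6:32) to appear in /var/db/dhcpd_leases so it can learn the VM's IP. For readers of this report, below is a minimal Go sketch of that lookup, assuming bootpd's brace-delimited key=value lease format (the {Name:... IPAddress:... HWAddress:...} lines in the log are the parsed form of such entries). All names here are illustrative; this is not the minikube driver's actual code.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans a dhcpd_leases-style file and returns the ip_address of the
// entry whose hw_address matches mac, or "" when no lease matches, which is
// exactly the state this log is stuck in.
func ipForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	entry := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // a new lease entry opens
			entry = map[string]string{}
		case line == "}": // entry closed: check it against the target MAC
			hw := strings.TrimPrefix(entry["hw_address"], "1,") // "1," is the hw-type prefix
			if strings.EqualFold(hw, mac) {
				return entry["ip_address"], nil
			}
		case strings.Contains(line, "="):
			kv := strings.SplitN(line, "=", 2)
			entry[kv[0]] = kv[1]
		}
	}
	return "", sc.Err()
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "ca:3d:d3:5e:e6:32")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip) // empty while the lease has not been handed out yet
}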
	I0926 18:32:13.884491    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 4
	I0926 18:32:13.884518    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:13.884592    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:13.885410    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:13.885467    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:13.885478    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:13.885498    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:13.885507    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:13.885516    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:13.885523    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:13.885530    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:13.885537    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:13.885549    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:13.885558    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:13.885565    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:13.885572    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:13.885598    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:13.885622    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:13.885633    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:13.885640    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:13.885648    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:13.885654    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:13.885662    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:15.887351    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 5
	I0926 18:32:15.887367    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:15.887432    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:15.888253    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:15.888304    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:15.888317    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:15.888346    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:15.888358    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:15.888370    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:15.888378    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:15.888385    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:15.888390    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:15.888397    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:15.888403    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:15.888410    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:15.888422    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:15.888432    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:15.888439    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:15.888446    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:15.888465    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:15.888476    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:15.888489    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:15.888499    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:17.889251    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 6
	I0926 18:32:17.889265    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:17.889373    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:17.890204    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:17.890250    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:17.890260    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:17.890267    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:17.890272    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:17.890295    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:17.890304    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:17.890310    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:17.890322    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:17.890330    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:17.890342    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:17.890349    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:17.890354    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:17.890367    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:17.890375    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:17.890393    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:17.890406    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:17.890420    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:17.890429    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:17.890438    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:19.892460    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 7
	I0926 18:32:19.892483    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:19.892590    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:19.893431    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:19.893503    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:19.893538    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:19.893553    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:19.893559    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:19.893566    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:19.893573    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:19.893594    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:19.893601    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:19.893608    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:19.893613    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:19.893620    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:19.893625    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:19.893636    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:19.893644    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:19.893652    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:19.893660    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:19.893668    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:19.893676    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:19.893684    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:21.894673    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 8
	I0926 18:32:21.894685    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:21.894748    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:21.895671    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:21.895728    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:21.895745    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:21.895763    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:21.895786    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:21.895800    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:21.895811    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:21.895818    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:21.895826    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:21.895840    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:21.895853    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:21.895861    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:21.895868    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:21.895881    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:21.895891    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:21.895899    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:21.895904    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:21.895918    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:21.895929    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:21.895938    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:23.896269    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 9
	I0926 18:32:23.896284    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:23.896348    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:23.897172    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:23.897209    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:23.897227    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:23.897234    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:23.897242    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:23.897249    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:23.897256    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:23.897261    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:23.897268    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:23.897275    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:23.897302    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:23.897314    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:23.897330    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:23.897344    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:23.897352    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:23.897359    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:23.897373    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:23.897383    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:23.897399    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:23.897416    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:25.897870    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 10
	I0926 18:32:25.897883    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:25.897983    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:25.898801    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:25.898851    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:25.898861    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:25.898873    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:25.898880    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:25.898886    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:25.898894    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:25.898910    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:25.898924    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:25.898949    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:25.898980    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:25.899001    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:25.899018    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:25.899031    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:25.899040    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:25.899048    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:25.899055    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:25.899063    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:25.899086    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:25.899100    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:27.899860    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 11
	I0926 18:32:27.899871    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:27.899945    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:27.900957    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:27.901022    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:27.901031    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:27.901038    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:27.901047    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:27.901056    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:27.901061    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:27.901068    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:27.901082    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:27.901089    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:27.901097    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:27.901117    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:27.901128    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:27.901135    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:27.901146    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:27.901153    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:27.901159    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:27.901166    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:27.901173    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:27.901182    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:29.901258    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 12
	I0926 18:32:29.901274    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:29.901386    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:29.902198    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:29.902249    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:29.902261    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:29.902278    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:29.902287    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:29.902299    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:29.902317    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:29.902328    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:29.902335    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:29.902355    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:29.902367    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:29.902380    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:29.902392    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:29.902401    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:29.902410    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:29.902420    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:29.902432    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:29.902442    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:29.902455    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:29.902463    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:31.904087    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 13
	I0926 18:32:31.904101    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:31.904163    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:31.904995    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:31.905051    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:31.905063    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:31.905072    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:31.905080    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:31.905086    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:31.905094    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:31.905113    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:31.905126    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:31.905133    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:31.905141    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:31.905148    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:31.905155    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:31.905171    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:31.905184    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:31.905191    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:31.905199    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:31.905207    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:31.905215    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:31.905234    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:33.905721    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 14
	I0926 18:32:33.905736    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:33.905795    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:33.906845    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:33.906871    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:33.906880    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:33.906887    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:33.906894    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:33.906900    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:33.906906    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:33.906912    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:33.906921    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:33.906936    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:33.906944    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:33.906953    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:33.906960    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:33.906967    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:33.906973    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:33.906985    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:33.906997    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:33.907013    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:33.907031    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:33.907041    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:35.908289    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 15
	I0926 18:32:35.908301    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:35.908390    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:35.909194    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:35.909240    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:35.909252    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:35.909268    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:35.909275    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:35.909281    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:35.909287    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:35.909297    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:35.909305    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:35.909320    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:35.909332    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:35.909343    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:35.909350    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:35.909362    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:35.909378    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:35.909385    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:35.909392    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:35.909423    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:35.909435    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:35.909464    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
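Each attempt above scans the parsed lease records for the VM's MAC, ca:3d:d3:5e:e6:32, and every scan misses: all 18 entries carry other hardware addresses. A minimal sketch of that matching step, assuming only the fields printed in the "dhcp entry:" lines (the struct name, helper, and sample data here are illustrative, not the driver's actual source):

package main

import "fmt"

// DHCPEntry mirrors the fields shown in each "dhcp entry:" log line above.
type DHCPEntry struct {
	Name      string
	IPAddress string
	HWAddress string
	ID        string
	Lease     string
}

// findIPByMAC returns the IP leased to the given MAC, if any entry matches.
func findIPByMAC(entries []DHCPEntry, mac string) (string, bool) {
	for _, e := range entries {
		if e.HWAddress == mac {
			return e.IPAddress, true
		}
	}
	return "", false
}

func main() {
	// Two of the 18 entries from the log, for illustration.
	entries := []DHCPEntry{
		{Name: "minikube", IPAddress: "192.169.0.19", HWAddress: "96:aa:2d:b1:fe:37", ID: "1,96:aa:2d:b1:fe:37", Lease: "0x66f75ab6"},
		{Name: "minikube", IPAddress: "192.169.0.2", HWAddress: "8a:7e:35:69:36:a6", ID: "1,8a:7e:35:69:36:a6", Lease: "0x66f74a6f"},
	}
	// The MAC the driver is waiting for never appears among the leases,
	// so every attempt recorded above comes up empty.
	if ip, ok := findIPByMAC(entries, "ca:3d:d3:5e:e6:32"); ok {
		fmt.Println("found:", ip)
	} else {
		fmt.Println("no lease yet for ca:3d:d3:5e:e6:32")
	}
}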
	I0926 18:32:37.909480    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 16
	I0926 18:32:37.909494    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:37.909595    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:37.910394    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:37.910444    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:37.910456    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:37.910468    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:37.910474    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:37.910480    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:37.910488    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:37.910495    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:37.910501    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:37.910511    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:37.910539    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:37.910549    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:37.910555    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:37.910568    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:37.910577    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:37.910585    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:37.910592    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:37.910598    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:37.910605    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:37.910614    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:39.910741    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 17
	I0926 18:32:39.910754    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:39.910815    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:39.911620    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:39.911677    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:39.911690    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:39.911721    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:39.911734    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:39.911741    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:39.911749    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:39.911759    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:39.911768    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:39.911783    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:39.911798    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:39.911806    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:39.911821    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:39.911827    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:39.911836    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:39.911844    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:39.911851    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:39.911858    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:39.911885    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:39.911899    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:41.913912    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 18
	I0926 18:32:41.913926    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:41.913996    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:41.914799    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:41.914855    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:41.914862    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:41.914870    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:41.914880    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:41.914892    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:41.914902    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:41.914909    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:41.914916    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:41.914923    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:41.914930    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:41.914937    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:41.914946    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:41.914962    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:41.914973    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:41.914989    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:41.915001    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:41.915008    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:41.915015    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:41.915023    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:43.916146    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 19
	I0926 18:32:43.916166    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:43.916222    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:43.917075    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:43.917140    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:43.917156    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:43.917175    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:43.917207    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:43.917216    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:43.917221    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:43.917229    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:43.917234    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:43.917240    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:43.917255    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:43.917270    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:43.917284    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:43.917311    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:43.917325    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:43.917332    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:43.917342    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:43.917348    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:43.917355    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:43.917363    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:45.918122    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 20
	I0926 18:32:45.918136    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:45.918200    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:45.919015    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:45.919068    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:45.919081    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:45.919090    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:45.919096    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:45.919110    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:45.919127    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:45.919136    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:45.919143    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:45.919151    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:45.919160    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:45.919174    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:45.919182    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:45.919198    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:45.919209    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:45.919225    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:45.919235    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:45.919242    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:45.919250    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:45.919258    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:47.919375    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 21
	I0926 18:32:47.919395    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:47.919526    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:47.920494    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:47.920552    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:47.920563    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:47.920575    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:47.920583    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:47.920590    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:47.920598    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:47.920605    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:47.920613    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:47.920622    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:47.920639    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:47.920652    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:47.920659    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:47.920665    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:47.920671    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:47.920678    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:47.920686    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:47.920693    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:47.920701    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:47.920715    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:49.921731    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 22
	I0926 18:32:49.921743    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:49.921806    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:49.922776    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:49.922818    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:49.922831    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:49.922840    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:49.922846    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:49.922873    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:49.922887    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:49.922898    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:49.922906    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:49.922914    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:49.922923    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:49.922929    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:49.922938    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:49.922945    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:49.922953    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:49.922964    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:49.922973    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:49.922980    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:49.922988    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:49.923005    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:51.924999    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 23
	I0926 18:32:51.925014    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:51.925092    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:51.926209    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:51.926269    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:51.926283    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:51.926313    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:51.926320    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:51.926334    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:51.926347    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:51.926354    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:51.926363    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:51.926387    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:51.926399    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:51.926407    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:51.926414    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:51.926421    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:51.926429    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:51.926439    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:51.926445    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:51.926453    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:51.926461    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:51.926472    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:53.928482    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 24
	I0926 18:32:53.928497    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:53.928561    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:53.929381    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:53.929432    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:53.929444    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:53.929453    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:53.929459    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:53.929467    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:53.929474    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:53.929480    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:53.929487    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:53.929494    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:53.929503    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:53.929510    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:53.929517    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:53.929541    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:53.929552    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:53.929560    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:53.929572    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:53.929579    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:53.929587    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:53.929595    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:55.931604    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 25
	I0926 18:32:55.931617    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:55.931660    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:55.932455    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:55.932499    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:55.932511    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:55.932520    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:55.932526    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:55.932534    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:55.932539    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:55.932553    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:55.932562    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:55.932572    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:55.932577    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:55.932590    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:55.932600    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:55.932607    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:55.932613    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:55.932630    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:55.932637    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:55.932646    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:55.932653    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:55.932670    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:57.934702    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 26
	I0926 18:32:57.934726    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:57.934752    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:57.935553    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:57.935611    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:57.935624    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:57.935633    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:57.935644    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:57.935656    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:57.935666    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:57.935675    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:57.935681    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:57.935693    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:57.935706    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:57.935720    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:57.935732    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:57.935744    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:57.935751    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:57.935758    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:57.935772    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:57.935794    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:57.935804    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:57.935822    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:32:59.936012    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 27
	I0926 18:32:59.936026    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:32:59.936089    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:32:59.936960    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:32:59.937010    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:32:59.937021    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:32:59.937045    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:32:59.937052    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:32:59.937058    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:32:59.937068    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:32:59.937074    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:32:59.937085    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:32:59.937091    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:32:59.937098    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:32:59.937106    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:32:59.937113    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:32:59.937119    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:32:59.937133    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:32:59.937145    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:32:59.937152    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:32:59.937158    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:32:59.937170    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:32:59.937193    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:33:01.939201    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 28
	I0926 18:33:01.939222    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:33:01.939290    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:33:01.940106    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:33:01.940147    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:33:01.940160    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:33:01.940174    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:33:01.940181    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:33:01.940188    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:33:01.940193    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:33:01.940200    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:33:01.940207    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:33:01.940212    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:33:01.940229    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:33:01.940236    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:33:01.940245    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:33:01.940268    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:33:01.940281    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:33:01.940289    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:33:01.940298    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:33:01.940326    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:33:01.940342    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:33:01.940359    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:33:03.941566    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Attempt 29
	I0926 18:33:03.941578    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:33:03.941641    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | hyperkit pid from json: 6577
	I0926 18:33:03.942680    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Searching for ca:3d:d3:5e:e6:32 in /var/db/dhcpd_leases ...
	I0926 18:33:03.942715    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:33:03.942731    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:33:03.942742    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:33:03.942749    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:33:03.942755    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:33:03.942763    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:33:03.942772    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:33:03.942778    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:33:03.942784    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:33:03.942796    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:33:03.942805    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:33:03.942811    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:33:03.942829    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:33:03.942835    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:33:03.942841    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:33:03.942849    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:33:03.942860    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:33:03.942866    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:33:03.942874    6185 main.go:141] libmachine: (force-systemd-flag-396000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:33:05.944925    6185 client.go:171] duration metric: took 1m1.027106923s to LocalClient.Create
	I0926 18:33:07.945351    6185 start.go:128] duration metric: took 1m3.059025825s to createHost
	I0926 18:33:07.945363    6185 start.go:83] releasing machines lock for "force-systemd-flag-396000", held for 1m3.05913962s
	W0926 18:33:07.945433    6185 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-396000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:3d:d3:5e:e6:32
	I0926 18:33:08.008519    6185 out.go:201] 
	W0926 18:33:08.029655    6185 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:3d:d3:5e:e6:32
	W0926 18:33:08.029668    6185 out.go:270] * 
	W0926 18:33:08.030337    6185 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:33:08.092599    6185 out.go:201] 

** /stderr **
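
The repeated "Attempt N" blocks in the stderr above are the hyperkit driver polling the host's DHCP lease database until the VM's freshly generated MAC address appears: one scan roughly every two seconds, thirty attempts in all, which lines up with the ~1m1s LocalClient.Create duration logged above. A minimal sketch of that lookup, assuming single-line lease records shaped like the entries echoed in the log; the names findIPForMAC and leaseRe are illustrative, not the driver's actual code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// leaseRe matches records of the form echoed in the log above:
// {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ...}
var leaseRe = regexp.MustCompile(`IPAddress:(\S+) HWAddress:(\S+)`)

// findIPForMAC scans the lease file and returns the IP bound to mac,
// or "" when no lease matches -- the condition this test run hit.
func findIPForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if m := leaseRe.FindStringSubmatch(sc.Text()); m != nil && m[2] == mac {
			return m[1], nil
		}
	}
	return "", sc.Err()
}

func main() {
	// The driver repeats this lookup on a timer (Attempt 0..29 above),
	// since the lease only appears after the guest's DHCP request succeeds.
	ip, err := findIPForMAC("/var/db/dhcpd_leases", "ca:3d:d3:5e:e6:32")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ip:", ip) // empty here, matching "IP address never found"
}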
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-396000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-396000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-396000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (182.644774ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-flag-396000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
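
A note on the odd-looking "minikube delete <no value>" suggestion above: "<no value>" is the literal string Go's text/template engine renders when a referenced field cannot be resolved, which happens here because no profile name survives the failed create (note the "failed to lookup ip for \"\"" error). A self-contained demonstration of that behavior; the template text is illustrative, not minikube's actual suggestion template:

package main

import (
	"os"
	"text/template"
)

func main() {
	// {{.Name}} resolved against a map with no "Name" key renders as "<no value>".
	t := template.Must(template.New("suggest").Parse("minikube delete {{.Name}}\n"))
	if err := t.Execute(os.Stdout, map[string]string{}); err != nil {
		panic(err)
	}
	// Output: minikube delete <no value>
}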
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-396000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-26 18:33:08.387613 -0700 PDT m=+4751.120242540
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-396000 -n force-systemd-flag-396000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-396000 -n force-systemd-flag-396000: exit status 7 (81.710489ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0926 18:33:08.467476    6590 status.go:386] failed to get driver ip: getting IP: IP address is not set
	E0926 18:33:08.467494    6590 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-396000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-flag-396000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-396000
E0926 18:33:13.544748    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-396000: (5.274381889s)
--- FAIL: TestForceSystemdFlag (252.30s)

TestForceSystemdEnv (233.75s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-761000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E0926 18:28:14.523175    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-761000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (3m48.15466581s)

-- stdout --
	* [force-systemd-env-761000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-env-761000" primary control-plane node in "force-systemd-env-761000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-env-761000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0926 18:26:10.909605    6117 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:26:10.909860    6117 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:26:10.909865    6117 out.go:358] Setting ErrFile to fd 2...
	I0926 18:26:10.909869    6117 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:26:10.910035    6117 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 18:26:10.911568    6117 out.go:352] Setting JSON to false
	I0926 18:26:10.933869    6117 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5140,"bootTime":1727395230,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0926 18:26:10.934022    6117 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:26:10.965893    6117 out.go:177] * [force-systemd-env-761000] minikube v1.34.0 on Darwin 14.6.1
	I0926 18:26:11.012473    6117 notify.go:220] Checking for updates...
	I0926 18:26:11.032342    6117 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:26:11.053282    6117 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 18:26:11.074390    6117 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0926 18:26:11.095322    6117 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:26:11.116178    6117 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 18:26:11.137384    6117 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0926 18:26:11.158842    6117 config.go:182] Loaded profile config "offline-docker-713000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:26:11.158924    6117 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:26:11.187200    6117 out.go:177] * Using the hyperkit driver based on user configuration
	I0926 18:26:11.228251    6117 start.go:297] selected driver: hyperkit
	I0926 18:26:11.228261    6117 start.go:901] validating driver "hyperkit" against <nil>
	I0926 18:26:11.228270    6117 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:26:11.231048    6117 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:26:11.231168    6117 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19711-1128/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0926 18:26:11.239373    6117 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0926 18:26:11.243074    6117 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:26:11.243095    6117 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0926 18:26:11.243128    6117 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 18:26:11.243372    6117 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 18:26:11.243403    6117 cni.go:84] Creating CNI manager for ""
	I0926 18:26:11.243445    6117 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:26:11.243453    6117 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 18:26:11.243509    6117 start.go:340] cluster config:
	{Name:force-systemd-env-761000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:26:11.243590    6117 iso.go:125] acquiring lock: {Name:mka8a9c5a237c1e4ae233281d2ff7965d13a843d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:26:11.264312    6117 out.go:177] * Starting "force-systemd-env-761000" primary control-plane node in "force-systemd-env-761000" cluster
	I0926 18:26:11.285259    6117 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:26:11.285287    6117 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0926 18:26:11.285305    6117 cache.go:56] Caching tarball of preloaded images
	I0926 18:26:11.285388    6117 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 18:26:11.285396    6117 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:26:11.285473    6117 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/force-systemd-env-761000/config.json ...
	I0926 18:26:11.285492    6117 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/force-systemd-env-761000/config.json: {Name:mk619b82b8161d2b9c1b238c5922c167d9334471 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:26:11.285782    6117 start.go:360] acquireMachinesLock for force-systemd-env-761000: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:26:49.999878    6117 start.go:364] duration metric: took 38.713726479s to acquireMachinesLock for "force-systemd-env-761000"
	I0926 18:26:49.999915    6117 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
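
The &{...} dump above is the cluster-config struct echoed with Go's %+v verb; per the earlier "Saving config to .../config.json" line, the same data is persisted to disk as JSON. A minimal sketch using a hypothetical two-field subset of that struct:

package main

import (
	"encoding/json"
	"fmt"
)

// ClusterConfig here is an illustrative two-field subset, not minikube's full type.
type ClusterConfig struct {
	Name   string
	Memory int
}

func main() {
	cfg := ClusterConfig{Name: "force-systemd-env-761000", Memory: 2048}
	fmt.Printf("&%+v\n", cfg) // log-style echo: &{Name:force-systemd-env-761000 Memory:2048}

	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // the shape of what lands in profiles/<name>/config.json
}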
	I0926 18:26:49.999963    6117 start.go:125] createHost starting for "" (driver="hyperkit")
	I0926 18:26:50.021630    6117 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0926 18:26:50.021793    6117 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:26:50.021828    6117 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:26:50.030242    6117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53826
	I0926 18:26:50.030583    6117 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:26:50.030994    6117 main.go:141] libmachine: Using API Version  1
	I0926 18:26:50.031004    6117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:26:50.031227    6117 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:26:50.031344    6117 main.go:141] libmachine: (force-systemd-env-761000) Calling .GetMachineName
	I0926 18:26:50.031447    6117 main.go:141] libmachine: (force-systemd-env-761000) Calling .DriverName
	I0926 18:26:50.031572    6117 start.go:159] libmachine.API.Create for "force-systemd-env-761000" (driver="hyperkit")
	I0926 18:26:50.031612    6117 client.go:168] LocalClient.Create starting
	I0926 18:26:50.031643    6117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem
	I0926 18:26:50.031693    6117 main.go:141] libmachine: Decoding PEM data...
	I0926 18:26:50.031707    6117 main.go:141] libmachine: Parsing certificate...
	I0926 18:26:50.031772    6117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem
	I0926 18:26:50.031816    6117 main.go:141] libmachine: Decoding PEM data...
	I0926 18:26:50.031824    6117 main.go:141] libmachine: Parsing certificate...
	I0926 18:26:50.031835    6117 main.go:141] libmachine: Running pre-create checks...
	I0926 18:26:50.031844    6117 main.go:141] libmachine: (force-systemd-env-761000) Calling .PreCreateCheck
	I0926 18:26:50.031924    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:50.032117    6117 main.go:141] libmachine: (force-systemd-env-761000) Calling .GetConfigRaw
	I0926 18:26:50.042591    6117 main.go:141] libmachine: Creating machine...
	I0926 18:26:50.042602    6117 main.go:141] libmachine: (force-systemd-env-761000) Calling .Create
	I0926 18:26:50.042723    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:50.042818    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | I0926 18:26:50.042681    6133 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 18:26:50.042860    6117 main.go:141] libmachine: (force-systemd-env-761000) Downloading /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1128/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0926 18:26:50.244395    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | I0926 18:26:50.244301    6133 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/id_rsa...
	I0926 18:26:50.352024    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | I0926 18:26:50.351938    6133 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/force-systemd-env-761000.rawdisk...
	I0926 18:26:50.352035    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Writing magic tar header
	I0926 18:26:50.352045    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Writing SSH key tar header
	I0926 18:26:50.352593    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | I0926 18:26:50.352552    6133 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000 ...
	I0926 18:26:50.716989    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:50.717005    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/hyperkit.pid
	I0926 18:26:50.717064    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Using UUID 1fab9bad-665a-46df-92ca-a82b5df1572f
	I0926 18:26:50.743178    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Generated MAC b2:85:21:94:97:dd
	I0926 18:26:50.743195    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-761000
	I0926 18:26:50.743231    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:50 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1fab9bad-665a-46df-92ca-a82b5df1572f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:26:50.743262    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:50 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1fab9bad-665a-46df-92ca-a82b5df1572f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:26:50.743301    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:50 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1fab9bad-665a-46df-92ca-a82b5df1572f", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/force-systemd-env-761000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-761000"}
	I0926 18:26:50.743342    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:50 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1fab9bad-665a-46df-92ca-a82b5df1572f -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/force-systemd-env-761000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-761000"
	I0926 18:26:50.743354    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:50 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 18:26:50.746413    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:50 DEBUG: hyperkit: Pid is 6134
	I0926 18:26:50.746811    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 0
	I0926 18:26:50.746828    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:50.746922    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:26:50.747888    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:26:50.747948    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:50.747971    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:50.748001    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:50.748025    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:50.748051    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:50.748075    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:50.748091    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:50.748108    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:50.748121    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:50.748138    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:50.748173    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:50.748201    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:50.748211    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:50.748231    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:50.748248    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:50.748265    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:50.748277    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:50.748285    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:50.748291    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:50.754208    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:50 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 18:26:50.762787    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:50 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 18:26:50.763441    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:26:50.763461    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:26:50.763480    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:26:50.763496    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:26:51.140169    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 18:26:51.140184    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 18:26:51.254788    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:26:51.254806    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:26:51.254820    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:26:51.254831    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:26:51.255696    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 18:26:51.255706    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 18:26:52.749027    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 1
	I0926 18:26:52.749042    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:52.749085    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:26:52.749911    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:26:52.749958    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:52.749972    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:52.749981    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:52.749989    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:52.750004    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:52.750018    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:52.750027    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:52.750045    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:52.750063    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:52.750075    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:52.750085    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:52.750094    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:52.750101    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:52.750112    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:52.750119    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:52.750127    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:52.750133    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:52.750142    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:52.750150    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:54.750504    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 2
	I0926 18:26:54.750520    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:54.750650    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:26:54.751412    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:26:54.751469    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:54.751480    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:54.751488    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:54.751494    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:54.751524    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:54.751548    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:54.751559    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:54.751570    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:54.751584    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:54.751592    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:54.751607    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:54.751617    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:54.751624    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:54.751631    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:54.751651    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:54.751659    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:54.751673    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:54.751684    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:54.751694    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:56.664015    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:56 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0926 18:26:56.664191    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:56 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0926 18:26:56.664203    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:56 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0926 18:26:56.684131    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:26:56 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0926 18:26:56.752976    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 3
	I0926 18:26:56.753036    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:56.753209    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:26:56.754651    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:26:56.754748    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:56.754763    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:56.754779    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:56.754789    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:56.754803    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:56.754817    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:56.754829    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:56.754862    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:56.754877    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:56.754929    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:56.754956    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:56.754973    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:56.754986    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:56.754995    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:56.755007    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:56.755016    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:56.755027    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:56.755037    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:56.755047    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:26:58.756095    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 4
	I0926 18:26:58.756112    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:26:58.756210    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:26:58.756991    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:26:58.757047    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:26:58.757057    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:26:58.757075    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:26:58.757086    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:26:58.757098    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:26:58.757105    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:26:58.757111    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:26:58.757123    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:26:58.757131    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:26:58.757136    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:26:58.757142    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:26:58.757151    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:26:58.757159    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:26:58.757168    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:26:58.757176    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:26:58.757183    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:26:58.757191    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:26:58.757211    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:26:58.757219    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:00.759242    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 5
	I0926 18:27:00.759256    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:00.759341    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:00.760443    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:00.760497    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:00.760508    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:00.760520    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:00.760530    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:00.760537    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:00.760543    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:00.760556    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:00.760568    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:00.760576    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:00.760583    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:00.760600    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:00.760608    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:00.760615    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:00.760622    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:00.760636    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:00.760646    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:00.760655    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:00.760663    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:00.760675    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:02.762694    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 6
	I0926 18:27:02.762709    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:02.762766    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:02.763579    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:02.763622    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:02.763630    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:02.763639    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:02.763647    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:02.763654    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:02.763666    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:02.763673    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:02.763682    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:02.763690    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:02.763711    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:02.763717    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:02.763731    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:02.763740    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:02.763748    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:02.763756    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:02.763762    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:02.763770    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:02.763783    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:02.763791    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:04.765921    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 7
	I0926 18:27:04.765943    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:04.765954    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:04.766802    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:04.766851    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:04.766871    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:04.766894    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:04.766912    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:04.766921    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:04.766927    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:04.766934    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:04.766945    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:04.766955    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:04.766962    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:04.766972    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:04.766984    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:04.766993    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:04.767000    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:04.767005    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:04.767023    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:04.767034    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:04.767056    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:04.767067    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:06.768279    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 8
	I0926 18:27:06.768295    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:06.768376    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:06.769154    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:06.769198    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:06.769208    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:06.769218    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:06.769224    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:06.769241    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:06.769256    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:06.769265    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:06.769276    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:06.769289    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:06.769297    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:06.769307    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:06.769320    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:06.769327    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:06.769334    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:06.769353    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:06.769361    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:06.769368    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:06.769375    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:06.769383    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:08.770830    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 9
	I0926 18:27:08.770855    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:08.770911    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:08.771742    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:08.771792    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:08.771804    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:08.771811    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:08.771816    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:08.771826    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:08.771843    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:08.771855    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:08.771862    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:08.771869    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:08.771876    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:08.771883    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:08.771897    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:08.771908    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:08.771915    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:08.771920    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:08.771935    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:08.771947    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:08.771963    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:08.771975    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:10.773873    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 10
	I0926 18:27:10.773884    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:10.773999    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:10.774789    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:10.774829    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:10.774844    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:10.774876    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:10.774890    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:10.774899    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:10.774911    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:10.774918    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:10.774925    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:10.774932    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:10.774947    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:10.774966    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:10.774977    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:10.774989    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:10.774999    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:10.775018    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:10.775025    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:10.775032    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:10.775038    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:10.775046    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:12.775443    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 11
	I0926 18:27:12.775457    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:12.775527    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:12.776346    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:12.776406    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:12.776420    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:12.776432    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:12.776446    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:12.776452    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:12.776458    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:12.776464    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:12.776490    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:12.776502    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:12.776523    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:12.776532    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:12.776539    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:12.776545    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:12.776552    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:12.776558    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:12.776570    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:12.776582    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:12.776597    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:12.776609    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:14.778571    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 12
	I0926 18:27:14.778584    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:14.778660    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:14.779498    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:14.779540    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:14.779551    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:14.779558    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:14.779563    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:14.779586    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:14.779608    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:14.779618    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:14.779626    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:14.779635    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:14.779645    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:14.779653    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:14.779670    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:14.779681    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:14.779691    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:14.779701    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:14.779708    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:14.779720    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:14.779728    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:14.779737    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:16.781758    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 13
	I0926 18:27:16.781771    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:16.781827    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:16.782641    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:16.782661    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:16.782670    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:16.782680    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:16.782689    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:16.782697    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:16.782710    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:16.782717    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:16.782725    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:16.782731    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:16.782736    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:16.782743    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:16.782749    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:16.782755    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:16.782760    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:16.782768    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:16.782775    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:16.782781    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:16.782787    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:16.782794    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:18.783320    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 14
	I0926 18:27:18.783335    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:18.783400    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:18.784209    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:18.784258    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:18.784268    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:18.784283    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:18.784290    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:18.784307    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:18.784314    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:18.784321    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:18.784329    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:18.784336    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:18.784345    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:18.784361    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:18.784373    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:18.784379    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:18.784387    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:18.784397    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:18.784404    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:18.784411    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:18.784419    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:18.784435    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:20.785115    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 15
	I0926 18:27:20.785126    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:20.785173    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:20.785940    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:20.786007    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:20.786017    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:20.786026    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:20.786034    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:20.786041    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:20.786047    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:20.786054    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:20.786061    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:20.786073    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:20.786084    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:20.786091    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:20.786099    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:20.786105    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:20.786115    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:20.786133    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:20.786144    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:20.786152    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:20.786160    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:20.786169    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:22.786583    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 16
	I0926 18:27:22.786595    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:22.786669    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:22.787470    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:22.787511    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:22.787518    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:22.787528    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:22.787538    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:22.787549    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:22.787571    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:22.787583    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:22.787592    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:22.787598    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:22.787606    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:22.787615    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:22.787621    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:22.787638    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:22.787645    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:22.787651    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:22.787659    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:22.787665    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:22.787671    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:22.787677    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:24.788673    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 17
	I0926 18:27:24.788687    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:24.788756    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:24.789551    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:24.789609    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:24.789620    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:24.789628    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:24.789635    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:24.789641    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:24.789647    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:24.789653    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:24.789659    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:24.789666    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:24.789675    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:24.789691    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:24.789703    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:24.789711    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:24.789719    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:24.789731    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:24.789740    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:24.789747    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:24.789754    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:24.789768    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:26.791821    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 18
	I0926 18:27:26.791835    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:26.791903    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:26.792742    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:26.792789    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:26.792809    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:26.792831    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:26.792841    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:26.792849    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:26.792858    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:26.792874    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:26.792886    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:26.792897    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:26.792905    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:26.792927    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:26.792944    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:26.792954    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:26.792962    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:26.792976    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:26.792988    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:26.793016    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:26.793028    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:26.793037    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:28.794970    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 19
	I0926 18:27:28.794984    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:28.795055    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:28.795781    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:28.795843    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:28.795851    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:28.795858    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:28.795865    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:28.795872    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:28.795880    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:28.795893    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:28.795901    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:28.795918    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:28.795932    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:28.795941    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:28.795949    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:28.795964    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:28.795976    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:28.795983    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:28.795990    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:28.796002    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:28.796014    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:28.796024    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:30.796580    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 20
	I0926 18:27:30.796593    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:30.796675    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:30.797733    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:30.797774    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:30.797783    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:30.797792    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:30.797801    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:30.797808    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:30.797813    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:30.797819    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:30.797825    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:30.797831    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:30.797840    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:30.797859    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:30.797871    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:30.797878    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:30.797886    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:30.797900    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:30.797912    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:30.797924    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:30.797931    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:30.797939    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:32.799979    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 21
	I0926 18:27:32.799993    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:32.800051    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:32.800972    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:32.800984    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:32.801005    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:32.801014    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:32.801021    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:32.801029    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:32.801037    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:32.801043    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:32.801057    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:32.801067    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:32.801074    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:32.801083    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:32.801089    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:32.801095    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:32.801101    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:32.801107    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:32.801134    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:32.801150    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:32.801169    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:32.801186    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:34.801566    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 22
	I0926 18:27:34.801579    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:34.801656    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:34.802526    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:34.802583    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:34.802593    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:34.802606    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:34.802614    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:34.802630    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:34.802640    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:34.802646    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:34.802653    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:34.802661    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:34.802672    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:34.802682    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:34.802691    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:34.802700    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:34.802707    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:34.802714    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:34.802721    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:34.802727    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:34.802733    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:34.802741    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:36.802953    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 23
	I0926 18:27:36.802969    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:36.803039    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:36.803840    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:36.803898    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:36.803910    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:36.803938    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:36.803953    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:36.803961    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:36.803971    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:36.803979    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:36.803987    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:36.803994    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:36.804002    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:36.804008    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:36.804015    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:36.804022    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:36.804030    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:36.804039    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:36.804047    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:36.804060    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:36.804067    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:36.804074    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:38.804634    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 24
	I0926 18:27:38.804647    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:38.804739    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:38.805512    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:38.805575    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:38.805592    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:38.805609    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:38.805619    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:38.805627    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:38.805633    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:38.805638    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:38.805654    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:38.805664    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:38.805672    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:38.805679    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:38.805685    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:38.805691    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:38.805702    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:38.805715    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:38.805722    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:38.805727    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:38.805740    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:38.805746    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:40.807597    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 25
	I0926 18:27:40.807611    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:40.807665    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:40.808615    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:40.808664    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:40.808676    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:40.808684    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:40.808690    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:40.808696    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:40.808711    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:40.808726    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:40.808738    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:40.808746    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:40.808752    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:40.808761    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:40.808769    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:40.808775    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:40.808783    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:40.808790    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:40.808796    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:40.808802    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:40.808809    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:40.808817    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:42.809585    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 26
	I0926 18:27:42.809598    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:42.809639    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:42.810605    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:42.810658    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:42.810667    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:42.810674    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:42.810682    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:42.810693    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:42.810704    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:42.810711    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:42.810719    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:42.810727    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:42.810734    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:42.810742    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:42.810747    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:42.810770    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:42.810781    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:42.810799    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:42.810813    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:42.810821    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:42.810829    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:42.810855    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:44.811918    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 27
	I0926 18:27:44.811932    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:44.811982    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:44.812813    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:44.812861    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:44.812870    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:44.812884    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:44.812897    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:44.812912    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:44.812920    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:44.812927    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:44.812933    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:44.812954    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:44.812965    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:44.812973    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:44.812984    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:44.812992    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:44.812999    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:44.813006    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:44.813013    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:44.813028    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:44.813041    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:44.813052    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:46.815108    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 28
	I0926 18:27:46.815125    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:46.815138    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:46.815950    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:46.815986    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:46.816000    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:46.816015    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:46.816030    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:46.816047    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:46.816074    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:46.816087    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:46.816095    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:46.816102    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:46.816108    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:46.816123    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:46.816138    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:46.816146    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:46.816153    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:46.816162    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:46.816170    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:46.816176    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:46.816184    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:46.816192    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:48.816192    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 29
	I0926 18:27:48.816220    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:48.816306    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:48.817113    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for b2:85:21:94:97:dd in /var/db/dhcpd_leases ...
	I0926 18:27:48.817182    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:27:48.817191    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:27:48.817214    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:27:48.817236    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:27:48.817247    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:27:48.817253    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:27:48.817266    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:27:48.817279    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:27:48.817288    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:27:48.817297    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:27:48.817304    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:27:48.817309    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:27:48.817328    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:27:48.817339    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:27:48.817351    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:27:48.817365    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:27:48.817388    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:27:48.817412    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:27:48.817427    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:27:50.818610    6117 client.go:171] duration metric: took 1m0.786417821s to LocalClient.Create
	I0926 18:27:52.819312    6117 start.go:128] duration metric: took 1m2.818740067s to createHost
	I0926 18:27:52.819325    6117 start.go:83] releasing machines lock for "force-systemd-env-761000", held for 1m2.818867067s
	W0926 18:27:52.819354    6117 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b2:85:21:94:97:dd
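The long run of "Attempt N" blocks above is a fixed-cadence poll: every two seconds the driver re-reads /var/db/dhcpd_leases looking for the new VM's MAC address (b2:85:21:94:97:dd), and every scan keeps turning up the same 18 stale minikube leases until, after roughly a minute, LocalClient.Create gives up with the "IP address never found in dhcp leases file" error above. Below is a minimal sketch of that kind of poll loop; the key=value block layout of the lease file, the octet normalization, and the 30-attempt budget are assumptions made for illustration, not the driver's actual source.

// Minimal sketch, assuming bootpd-style key=value lease blocks and a
// 30 x 2s retry budget matching the cadence of the log above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

const leaseFile = "/var/db/dhcpd_leases"

// normalizeMAC lower-cases a MAC and strips leading zeros from each
// octet, since the lease file records e.g. "56:30:fb:e6:60:d", not
// "...:0d" (visible in the entries above).
func normalizeMAC(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		parts[i] = strings.TrimLeft(p, "0")
		if parts[i] == "" {
			parts[i] = "0"
		}
	}
	return strings.Join(parts, ":")
}

// findIPForMAC scans the lease file for a block whose hw_address
// matches the wanted MAC and returns that block's ip_address.
func findIPForMAC(path, mac string) (string, bool) {
	f, err := os.Open(path)
	if err != nil {
		return "", false
	}
	defer f.Close()

	want := normalizeMAC(mac)
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address=1,6e:13:d0:11:59:38 -> drop the "1," prefix.
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:]
			}
			if normalizeMAC(hw) == want && ip != "" {
				return ip, true
			}
		}
	}
	return "", false
}

func main() {
	mac := "b2:85:21:94:97:dd" // the MAC the log above is waiting for
	for attempt := 1; attempt <= 30; attempt++ {
		if ip, ok := findIPForMAC(leaseFile, mac); ok {
			fmt.Println("found IP", ip, "on attempt", attempt)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("IP address never found in dhcp leases file")
}

Note the octet handling: the log's own entries record addresses like 56:30:fb:e6:60:d with leading zeros dropped, so a naive string comparison against a zero-padded MAC would never match even if the lease were present.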
	I0926 18:27:52.819724    6117 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:27:52.819748    6117 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:27:52.828735    6117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53828
	I0926 18:27:52.829139    6117 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:27:52.829590    6117 main.go:141] libmachine: Using API Version  1
	I0926 18:27:52.829639    6117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:27:52.829940    6117 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:27:52.830394    6117 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:27:52.830418    6117 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:27:52.839062    6117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53830
	I0926 18:27:52.839511    6117 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:27:52.839965    6117 main.go:141] libmachine: Using API Version  1
	I0926 18:27:52.840004    6117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:27:52.840247    6117 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:27:52.840366    6117 main.go:141] libmachine: (force-systemd-env-761000) Calling .GetState
	I0926 18:27:52.840469    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:52.840557    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:52.841588    6117 main.go:141] libmachine: (force-systemd-env-761000) Calling .DriverName
	I0926 18:27:52.862674    6117 out.go:177] * Deleting "force-systemd-env-761000" in hyperkit ...
	I0926 18:27:52.903589    6117 main.go:141] libmachine: (force-systemd-env-761000) Calling .Remove
	I0926 18:27:52.903710    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:52.903718    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:52.903791    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:52.904744    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:52.904796    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | waiting for graceful shutdown
	I0926 18:27:53.906132    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:53.906223    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:53.907167    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | waiting for graceful shutdown
	I0926 18:27:54.908287    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:54.908382    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:54.910160    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | waiting for graceful shutdown
	I0926 18:27:55.910650    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:55.910757    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:55.911689    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | waiting for graceful shutdown
	I0926 18:27:56.912332    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:56.912425    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:56.913114    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | waiting for graceful shutdown
	I0926 18:27:57.915225    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:27:57.915287    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6134
	I0926 18:27:57.916183    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | sending sigkill
	I0926 18:27:57.916193    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0926 18:27:57.927944    6117 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b2:85:21:94:97:dd
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b2:85:21:94:97:dd
	I0926 18:27:57.927966    6117 start.go:729] Will try again in 5 seconds ...
	I0926 18:27:57.937140    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:27:57 WARN : hyperkit: failed to read stdout: EOF
	I0926 18:27:57.937158    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:27:57 WARN : hyperkit: failed to read stderr: EOF
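	
	The failure above ("IP address never found in dhcp leases file") comes from the driver polling macOS's DHCP lease database for the MAC it generated for the VM: each "Attempt N" block below re-reads /var/db/dhcpd_leases and scans the printed entries for the target hardware address. A minimal Go sketch of that lookup, assuming the standard bootpd lease-file layout (the leaseEntry type, field parsing, and ipForMAC helper are illustrative, not minikube's actual API):
	
	-- sketch (Go) --
	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"strings"
		"time"
	)
	
	// leaseEntry mirrors the fields shown in the "dhcp entry" log lines.
	type leaseEntry struct {
		Name, IPAddress, HWAddress string
	}
	
	// parseLeases reads /var/db/dhcpd_leases-style blocks (assumed layout):
	//	{
	//		name=minikube
	//		ip_address=192.169.0.19
	//		hw_address=1,96:aa:2d:b1:fe:37
	//	}
	func parseLeases(path string) ([]leaseEntry, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()
		var entries []leaseEntry
		var cur leaseEntry
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "{":
				cur = leaseEntry{}
			case line == "}":
				entries = append(entries, cur)
			case strings.HasPrefix(line, "name="):
				cur.Name = strings.TrimPrefix(line, "name=")
			case strings.HasPrefix(line, "ip_address="):
				cur.IPAddress = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// stored as "1,aa:ee:1c:8f:8e:9e"; drop the leading type code
				if i := strings.IndexByte(line, ','); i >= 0 {
					cur.HWAddress = line[i+1:]
				}
			}
		}
		return entries, sc.Err()
	}
	
	// ipForMAC re-reads the lease file on an interval, mirroring the
	// "Attempt 0", "Attempt 1", ... loop in the log.
	func ipForMAC(mac string, attempts int, delay time.Duration) (string, error) {
		for i := 0; i < attempts; i++ {
			if entries, err := parseLeases("/var/db/dhcpd_leases"); err == nil {
				for _, e := range entries {
					if strings.EqualFold(e.HWAddress, mac) {
						return e.IPAddress, nil
					}
				}
			}
			time.Sleep(delay)
		}
		return "", fmt.Errorf("could not find an IP address for %s", mac)
	}
	
	func main() {
		ip, err := ipForMAC("aa:ee:1c:8f:8e:9e", 30, 2*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(ip)
	}
	-- /sketch --
	
	The guest here never completes a DHCP exchange, so its MAC never appears among the 18 stale "minikube" leases and the create eventually gives up, which is why the driver deletes the VM and retries below with a freshly generated MAC.
	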
	I0926 18:28:02.928192    6117 start.go:360] acquireMachinesLock for force-systemd-env-761000: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:28:55.831357    6117 start.go:364] duration metric: took 52.902641856s to acquireMachinesLock for "force-systemd-env-761000"
	I0926 18:28:55.831390    6117 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:28:55.831443    6117 start.go:125] createHost starting for "" (driver="hyperkit")
	I0926 18:28:55.873769    6117 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0926 18:28:55.873923    6117 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:28:55.873945    6117 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:28:55.882998    6117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53834
	I0926 18:28:55.883509    6117 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:28:55.883887    6117 main.go:141] libmachine: Using API Version  1
	I0926 18:28:55.883905    6117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:28:55.884139    6117 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:28:55.884249    6117 main.go:141] libmachine: (force-systemd-env-761000) Calling .GetMachineName
	I0926 18:28:55.884346    6117 main.go:141] libmachine: (force-systemd-env-761000) Calling .DriverName
	I0926 18:28:55.884455    6117 start.go:159] libmachine.API.Create for "force-systemd-env-761000" (driver="hyperkit")
	I0926 18:28:55.884476    6117 client.go:168] LocalClient.Create starting
	I0926 18:28:55.884498    6117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem
	I0926 18:28:55.884552    6117 main.go:141] libmachine: Decoding PEM data...
	I0926 18:28:55.884563    6117 main.go:141] libmachine: Parsing certificate...
	I0926 18:28:55.884607    6117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem
	I0926 18:28:55.884643    6117 main.go:141] libmachine: Decoding PEM data...
	I0926 18:28:55.884653    6117 main.go:141] libmachine: Parsing certificate...
	I0926 18:28:55.884666    6117 main.go:141] libmachine: Running pre-create checks...
	I0926 18:28:55.884671    6117 main.go:141] libmachine: (force-systemd-env-761000) Calling .PreCreateCheck
	I0926 18:28:55.884751    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:28:55.884780    6117 main.go:141] libmachine: (force-systemd-env-761000) Calling .GetConfigRaw
	I0926 18:28:55.894873    6117 main.go:141] libmachine: Creating machine...
	I0926 18:28:55.894880    6117 main.go:141] libmachine: (force-systemd-env-761000) Calling .Create
	I0926 18:28:55.894977    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:28:55.895096    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | I0926 18:28:55.894969    6171 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 18:28:55.895152    6117 main.go:141] libmachine: (force-systemd-env-761000) Downloading /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1128/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0926 18:28:56.239215    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | I0926 18:28:56.239114    6171 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/id_rsa...
	I0926 18:28:56.350757    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | I0926 18:28:56.350675    6171 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/force-systemd-env-761000.rawdisk...
	I0926 18:28:56.350770    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Writing magic tar header
	I0926 18:28:56.350779    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Writing SSH key tar header
	I0926 18:28:56.351143    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | I0926 18:28:56.351107    6171 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000 ...
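	
	The "Writing magic tar header" / "Writing SSH key tar header" lines reflect how the .rawdisk file is seeded: the head of the raw disk is a small tar stream carrying the freshly generated SSH key, preceded by a magic marker that the boot2docker-style guest looks for before auto-formatting the disk. A rough Go sketch of that layout (the magic name and seedRawDisk helper follow the boot2docker data-disk convention as an assumption, not anything taken from this log):
	
	-- sketch (Go) --
	package main
	
	import (
		"archive/tar"
		"os"
	)
	
	// seedRawDisk writes a sparse raw disk whose head is a tar archive
	// containing the machine's SSH key material.
	func seedRawDisk(path, keyPath string, sizeBytes int64) error {
		f, err := os.Create(path)
		if err != nil {
			return err
		}
		defer f.Close()
	
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
	
		tw := tar.NewWriter(f)
		// Magic marker the guest checks before formatting (assumed value).
		if err := tw.WriteHeader(&tar.Header{Name: "boot2docker, please format-me", Size: 0}); err != nil {
			return err
		}
		if err := tw.WriteHeader(&tar.Header{Name: ".ssh/id_rsa", Mode: 0600, Size: int64(len(key))}); err != nil {
			return err
		}
		if _, err := tw.Write(key); err != nil {
			return err
		}
		if err := tw.Close(); err != nil {
			return err
		}
		// Extend to the full disk size; the file stays sparse on disk.
		return f.Truncate(sizeBytes)
	}
	
	func main() {
		if err := seedRawDisk("disk.rawdisk", "id_rsa", 20000*1024*1024); err != nil {
			panic(err)
		}
	}
	-- /sketch --
	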
	I0926 18:28:56.715332    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:28:56.715352    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/hyperkit.pid
	I0926 18:28:56.715363    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Using UUID 45e086fc-36bd-43f1-bbea-ae4e0fbc3296
	I0926 18:28:56.740602    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Generated MAC aa:ee:1c:8f:8e:9e
	I0926 18:28:56.740618    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-761000
	I0926 18:28:56.740651    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:56 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"45e086fc-36bd-43f1-bbea-ae4e0fbc3296", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:28:56.740676    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:56 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"45e086fc-36bd-43f1-bbea-ae4e0fbc3296", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:28:56.740715    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:56 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "45e086fc-36bd-43f1-bbea-ae4e0fbc3296", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/force-systemd-env-761000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-761000"}
	I0926 18:28:56.740748    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:56 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 45e086fc-36bd-43f1-bbea-ae4e0fbc3296 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/force-systemd-env-761000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-761000"
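	
	The Start/check dumps and the Arguments/CmdLine lines above show everything the driver hands to hyperkit: -s slots attach the hostbridge, the virtio-net NIC, the raw disk, the boot ISO, and an entropy device, and instead of a firmware image the guest kernel is booted directly via -f kexec,<bzimage>,<initrd>,<cmdline>. A minimal os/exec sketch of the same invocation, with flag values copied from the log (stateDir and the shortened kernel command line are placeholders):
	
	-- sketch (Go) --
	package main
	
	import (
		"log"
		"os/exec"
	)
	
	func main() {
		// Placeholder for the per-machine state directory used in the log above.
		stateDir := "/path/to/.minikube/machines/force-systemd-env-761000"
	
		cmd := exec.Command("/usr/local/bin/hyperkit",
			"-A", "-u",
			"-F", stateDir+"/hyperkit.pid", // pid file re-read on each attempt
			"-c", "2", "-m", "2048M", // CPUs=2, Memory=2048MB from the config
			"-s", "0:0,hostbridge",
			"-s", "31,lpc",
			"-s", "1:0,virtio-net", // the NIC whose MAC the lease scan waits for
			"-U", "45e086fc-36bd-43f1-bbea-ae4e0fbc3296",
			"-s", "2:0,virtio-blk,"+stateDir+"/force-systemd-env-761000.rawdisk",
			"-s", "3,ahci-cd,"+stateDir+"/boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-l", "com1,autopty="+stateDir+"/tty,log="+stateDir+"/console-ring",
			// Boot the kernel directly; the full cmdline from the log is abbreviated here.
			"-f", "kexec,"+stateDir+"/bzimage,"+stateDir+"/initrd,"+
				"earlyprintk=serial loglevel=3 console=ttyS0",
		)
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
		// Compare "DEBUG: hyperkit: Pid is 6181" below.
		log.Printf("hyperkit pid %d", cmd.Process.Pid)
	}
	-- /sketch --
	
	The "Generated MAC aa:ee:1c:8f:8e:9e" line appears before hyperkit even starts because, with VMNet networking, the guest MAC is derived from the -U UUID (via the vmnet framework), so the driver already knows which hardware address to watch for in the lease file.
	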
	I0926 18:28:56.740758    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:56 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 18:28:56.743662    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:56 DEBUG: hyperkit: Pid is 6181
	I0926 18:28:56.744800    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 0
	I0926 18:28:56.744817    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:28:56.744880    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:28:56.745798    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:28:56.745864    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:28:56.745882    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:28:56.745927    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:28:56.745950    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:28:56.745963    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:28:56.745976    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:28:56.745986    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:28:56.745993    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:28:56.745999    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:28:56.746004    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:28:56.746019    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:28:56.746032    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:28:56.746050    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:28:56.746067    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:28:56.746075    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:28:56.746083    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:28:56.746095    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:28:56.746103    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:28:56.746110    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:28:56.751640    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:56 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 18:28:56.759737    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:56 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/force-systemd-env-761000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 18:28:56.760619    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:28:56.760635    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:28:56.760643    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:28:56.760673    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:28:57.136110    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 18:28:57.136124    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 18:28:57.250844    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:28:57.250866    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:28:57.250879    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:28:57.250888    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:28:57.251738    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 18:28:57.251747    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:28:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 18:28:58.747672    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 1
	I0926 18:28:58.747687    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:28:58.747756    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:28:58.748637    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:28:58.748689    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:28:58.748699    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:28:58.748713    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:28:58.748719    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:28:58.748736    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:28:58.748743    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:28:58.748749    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:28:58.748756    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:28:58.748762    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:28:58.748769    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:28:58.748783    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:28:58.748796    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:28:58.748812    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:28:58.748819    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:28:58.748826    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:28:58.748834    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:28:58.748843    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:28:58.748852    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:28:58.748862    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:00.748971    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 2
	I0926 18:29:00.748984    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:00.749115    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:00.749909    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:00.749956    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:00.749969    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:00.749978    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:00.749984    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:00.749990    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:00.749999    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:00.750013    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:00.750021    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:00.750029    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:00.750037    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:00.750052    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:00.750066    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:00.750084    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:00.750096    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:00.750117    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:00.750130    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:00.750141    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:00.750150    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:00.750158    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:02.672250    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:29:02 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0926 18:29:02.672386    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:29:02 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0926 18:29:02.672394    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:29:02 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0926 18:29:02.692709    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | 2024/09/26 18:29:02 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0926 18:29:02.752288    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 3
	I0926 18:29:02.752315    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:02.752566    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:02.754097    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:02.754252    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:02.754269    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:02.754279    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:02.754290    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:02.754306    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:02.754318    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:02.754329    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:02.754340    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:02.754349    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:02.754357    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:02.754366    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:02.754376    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:02.754385    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:02.754395    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:02.754404    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:02.754413    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:02.754422    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:02.754432    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:02.754470    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:04.754680    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 4
	I0926 18:29:04.754700    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:04.754790    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:04.755664    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:04.755704    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:04.755717    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:04.755734    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:04.755742    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:04.755749    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:04.755759    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:04.755776    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:04.755789    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:04.755806    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:04.755816    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:04.755824    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:04.755838    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:04.755845    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:04.755853    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:04.755859    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:04.755867    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:04.755874    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:04.755882    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:04.755890    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:06.756136    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 5
	I0926 18:29:06.756151    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:06.756216    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:06.757014    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:06.757063    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:06.757070    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:06.757091    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:06.757101    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:06.757110    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:06.757119    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:06.757126    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:06.757139    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:06.757150    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:06.757160    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:06.757166    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:06.757172    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:06.757180    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:06.757195    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:06.757208    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:06.757223    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:06.757231    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:06.757238    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:06.757247    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:08.759216    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 6
	I0926 18:29:08.759228    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:08.759360    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:08.760373    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:08.760399    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:08.760411    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:08.760427    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:08.760436    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:08.760442    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:08.760448    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:08.760454    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:08.760460    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:08.760484    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:08.760502    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:08.760514    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:08.760523    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:08.760532    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:08.760546    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:08.760566    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:08.760573    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:08.760583    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:08.760590    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:08.760596    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:10.762611    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 7
	I0926 18:29:10.762625    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:10.762675    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:10.763547    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:10.763588    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:10.763598    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:10.763615    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:10.763624    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:10.763631    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:10.763637    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:10.763676    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:10.763685    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:10.763692    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:10.763698    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:10.763707    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:10.763715    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:10.763729    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:10.763737    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:10.763752    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:10.763764    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:10.763772    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:10.763779    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:10.763795    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:12.764006    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 8
	I0926 18:29:12.764021    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:12.764139    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:12.764918    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:12.764967    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:12.764976    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:12.764985    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:12.765007    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:12.765013    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:12.765021    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:12.765030    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:12.765036    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:12.765044    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:12.765051    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:12.765058    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:12.765074    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:12.765086    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:12.765109    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:12.765121    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:12.765128    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:12.765138    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:12.765153    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:12.765161    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:14.766100    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 9
	I0926 18:29:14.766114    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:14.766164    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:14.767221    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:14.767259    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:14.767268    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:14.767296    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:14.767307    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:14.767316    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:14.767322    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:14.767329    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:14.767337    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:14.767356    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:14.767367    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:14.767375    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:14.767383    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:14.767389    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:14.767397    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:14.767403    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:14.767409    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:14.767423    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:14.767434    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:14.767443    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:16.769475    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 10
	I0926 18:29:16.769486    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:16.769545    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:16.770351    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:16.770392    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:16.770401    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:16.770410    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:16.770415    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:16.770451    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:16.770464    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:16.770477    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:16.770485    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:16.770490    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:16.770495    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:16.770503    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:16.770512    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:16.770518    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:16.770524    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:16.770532    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:16.770539    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:16.770549    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:16.770558    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:16.770565    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:18.770725    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 11
	I0926 18:29:18.770738    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:18.770801    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:18.771831    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:18.771872    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:18.771880    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:18.771894    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:18.771900    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:18.771909    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:18.771917    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:18.771924    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:18.771932    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:18.771947    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:18.771962    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:18.771970    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:18.771977    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:18.771984    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:18.771991    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:18.771998    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:18.772004    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:18.772009    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:18.772017    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:18.772026    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:20.774075    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 12
	I0926 18:29:20.774112    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:20.774177    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:20.775013    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:20.775035    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:20.775048    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:20.775061    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:20.775075    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:20.775086    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:20.775096    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:20.775103    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:20.775111    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:20.775117    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:20.775125    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:20.775132    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:20.775139    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:20.775155    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:20.775166    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:20.775174    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:20.775182    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:20.775191    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:20.775205    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:20.775222    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:22.777213    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 13
	I0926 18:29:22.777228    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:22.777244    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:22.778050    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:22.778063    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:22.778085    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:22.778094    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:22.778101    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:22.778115    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:22.778127    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:22.778133    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:22.778139    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:22.778147    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:22.778160    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:22.778171    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:22.778195    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:22.778208    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:22.778217    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:22.778225    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:22.778232    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:22.778240    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:22.778250    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:22.778258    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:24.778745    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 14
	I0926 18:29:24.778758    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:24.778833    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:24.779642    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:24.779694    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:24.779704    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:24.779720    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:24.779730    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:24.779737    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:24.779747    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:24.779755    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:24.779760    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:24.779777    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:24.779786    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:24.779794    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:24.779800    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:24.779808    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:24.779822    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:24.779830    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:24.779836    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:24.779842    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:24.779847    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:24.779858    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:26.781871    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 15
	I0926 18:29:26.781882    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:26.781976    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:26.783035    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:26.783090    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:26.783098    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:26.783106    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:26.783120    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:26.783127    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:26.783136    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:26.783151    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:26.783175    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:26.783187    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:26.783195    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:26.783202    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:26.783211    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:26.783219    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:26.783226    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:26.783234    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:26.783245    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:26.783253    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:26.783263    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:26.783272    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:28.785299    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 16
	I0926 18:29:28.785311    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:28.785437    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:28.786264    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:28.786314    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:28.786323    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:28.786332    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:28.786372    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:28.786382    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:28.786398    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:28.786407    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:28.786414    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:28.786420    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:28.786430    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:28.786439    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:28.786447    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:28.786456    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:28.786463    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:28.786483    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:28.786496    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:28.786504    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:28.786511    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:28.786520    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:30.788534    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 17
	I0926 18:29:30.788547    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:30.788588    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:30.789535    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:30.789594    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:30.789605    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:30.789618    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:30.789628    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:30.789637    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:30.789642    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:30.789649    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:30.789657    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:30.789669    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:30.789677    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:30.789684    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:30.789692    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:30.789698    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:30.789707    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:30.789713    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:30.789721    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:30.789729    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:30.789740    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:30.789748    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:32.791792    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 18
	I0926 18:29:32.791805    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:32.791871    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:32.792816    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:32.792836    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:32.792852    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:32.792872    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:32.792880    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:32.792901    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:32.792912    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:32.792922    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:32.792929    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:32.792937    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:32.792944    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:32.792952    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:32.792966    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:32.792979    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:32.793003    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:32.793033    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:32.793039    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:32.793045    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:32.793053    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:32.793061    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:34.795055    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 19
	I0926 18:29:34.795070    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:34.795127    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:34.796233    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:34.796293    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:34.796303    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:34.796321    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:34.796332    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:34.796344    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:34.796355    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:34.796370    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:34.796382    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:34.796390    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:34.796397    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:34.796402    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:34.796409    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:34.796415    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:34.796420    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:34.796426    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:34.796433    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:34.796439    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:34.796445    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:34.796453    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:36.797811    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 20
	I0926 18:29:36.797823    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:36.797898    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:36.798923    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:36.798973    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:36.798983    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:36.798992    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:36.799000    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:36.799008    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:36.799017    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:36.799023    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:36.799031    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:36.799037    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:36.799045    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:36.799050    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:36.799064    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:36.799077    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:36.799084    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:36.799092    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:36.799107    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:36.799119    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:36.799127    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:36.799135    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
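	// --- Editor's note (hedged sketch, not verbatim minikube source) ---
	// The repeated "Attempt N" blocks above show docker-machine-driver-hyperkit
	// polling /var/db/dhcpd_leases on a ~2 second cadence, looking for a lease
	// whose HWAddress matches the new VM's MAC (aa:ee:1c:8f:8e:9e). All 18
	// recorded leases belong to other minikube VMs, so every attempt falls
	// through and the driver retries until it gives up. A minimal Go sketch of
	// that retry loop, assuming a hypothetical parseDHCPLeases helper and lease
	// struct (all names here are illustrative, not the driver's real API):
	//
	//     package main
	//
	//     import (
	//         "fmt"
	//         "time"
	//     )
	//
	//     type lease struct{ Name, IPAddress, HWAddress string }
	//
	//     // parseDHCPLeases is assumed to read and parse /var/db/dhcpd_leases;
	//     // its implementation is elided in this sketch.
	//     func parseDHCPLeases(path string) ([]lease, error) { return nil, nil }
	//
	//     // waitForIP mirrors the retry behaviour visible in the log above.
	//     func waitForIP(targetMAC string, maxAttempts int) (string, error) {
	//         for attempt := 1; attempt <= maxAttempts; attempt++ {
	//             leases, err := parseDHCPLeases("/var/db/dhcpd_leases")
	//             if err == nil {
	//                 for _, l := range leases {
	//                     if l.HWAddress == targetMAC { // lease found: VM has an IP
	//                         return l.IPAddress, nil
	//                     }
	//                 }
	//             }
	//             time.Sleep(2 * time.Second) // matches the ~2s gap between attempts
	//         }
	//         return "", fmt.Errorf("no DHCP lease for %s after %d attempts", targetMAC, maxAttempts)
	//     }
	// -------------------------------------------------------------------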
	I0926 18:29:38.801202    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 21
	I0926 18:29:38.801216    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:38.801278    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:38.802165    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:38.802207    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:38.802216    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:38.802225    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:38.802231    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:38.802237    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:38.802243    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:38.802256    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:38.802271    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:38.802279    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:38.802286    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:38.802296    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:38.802304    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:38.802314    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:38.802321    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:38.802337    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:38.802350    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:38.802358    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:38.802365    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:38.802373    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:40.804301    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 22
	I0926 18:29:40.804312    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:40.804390    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:40.805254    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:40.805279    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:40.805296    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:40.805305    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:40.805312    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:40.805325    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:40.805339    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:40.805357    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:40.805369    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:40.805376    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:40.805383    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:40.805389    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:40.805396    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:40.805401    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:40.805408    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:40.805414    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:40.805437    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:40.805452    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:40.805460    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:40.805468    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:42.806320    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 23
	I0926 18:29:42.806332    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:42.806373    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:42.807329    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:42.807379    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:42.807390    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:42.807400    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:42.807406    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:42.807417    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:42.807426    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:42.807432    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:42.807441    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:42.807453    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:42.807459    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:42.807466    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:42.807478    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:42.807490    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:42.807497    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:42.807512    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:42.807525    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:42.807536    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:42.807543    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:42.807551    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:44.809127    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 24
	I0926 18:29:44.809139    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:44.809214    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:44.810144    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:44.810194    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:44.810204    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:44.810215    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:44.810228    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:44.810236    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:44.810242    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:44.810247    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:44.810253    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:44.810260    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:44.810268    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:44.810284    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:44.810296    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:44.810304    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:44.810312    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:44.810318    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:44.810326    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:44.810340    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:44.810353    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:44.810368    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:46.810523    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 25
	I0926 18:29:46.810538    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:46.810591    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:46.811422    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:46.811478    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:46.811503    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:46.811518    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:46.811529    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:46.811540    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:46.811547    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:46.811555    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:46.811575    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:46.811588    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:46.811596    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:46.811605    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:46.811617    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:46.811628    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:46.811637    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:46.811650    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:46.811658    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:46.811669    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:46.811676    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:46.811683    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:48.812103    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 26
	I0926 18:29:48.812116    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:48.812196    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:48.813132    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:48.813169    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:48.813183    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:48.813202    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:48.813237    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:48.813248    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:48.813256    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:48.813261    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:48.813268    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:48.813273    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:48.813280    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:48.813285    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:48.813299    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:48.813310    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:48.813318    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:48.813324    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:48.813340    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:48.813350    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:48.813358    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:48.813367    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:50.815315    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 27
	I0926 18:29:50.815330    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:50.815398    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:50.816208    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:50.816259    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:50.816269    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:50.816282    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:50.816288    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:50.816295    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:50.816301    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:50.816307    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:50.816316    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:50.816324    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:50.816331    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:50.816338    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:50.816357    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:50.816370    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:50.816379    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:50.816386    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:50.816394    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:50.816401    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:50.816412    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:50.816421    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:52.817231    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 28
	I0926 18:29:52.817245    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:52.817310    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:52.818145    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:52.818190    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:52.818205    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:52.818224    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:52.818241    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:52.818249    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:52.818259    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:52.818266    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:52.818279    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:52.818288    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:52.818295    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:52.818308    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:52.818315    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:52.818323    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:52.818336    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:52.818348    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:52.818358    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:52.818366    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:52.818380    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:52.818390    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:54.818744    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Attempt 29
	I0926 18:29:54.818759    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:29:54.818843    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | hyperkit pid from json: 6181
	I0926 18:29:54.819713    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Searching for aa:ee:1c:8f:8e:9e in /var/db/dhcpd_leases ...
	I0926 18:29:54.819768    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0926 18:29:54.819780    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
	I0926 18:29:54.819789    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:71:7e:dd:30:b5 ID:1,4e:71:7e:dd:30:b5 Lease:0x66f759f7}
	I0926 18:29:54.819796    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:ea:e8:8c:da:fa ID:1,1a:ea:e8:8c:da:fa Lease:0x66f75938}
	I0926 18:29:54.819813    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:29:54.819820    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f607e4}
	I0926 18:29:54.819837    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:29:54.819848    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:56:30:fb:e6:60:d ID:1,56:30:fb:e6:60:d Lease:0x66f604eb}
	I0926 18:29:54.819856    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:ef:3c:87:4:31 ID:1,b2:ef:3c:87:4:31 Lease:0x66f75628}
	I0926 18:29:54.819870    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:65:d0:13:f8:f7 ID:1,9e:65:d0:13:f8:f7 Lease:0x66f755cf}
	I0926 18:29:54.819877    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:2e:6c:c5:be:6:1c ID:1,2e:6c:c5:be:6:1c Lease:0x66f755a1}
	I0926 18:29:54.819883    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f60414}
	I0926 18:29:54.819924    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 18:29:54.819935    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 18:29:54.819948    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 18:29:54.819955    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 18:29:54.819965    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 18:29:54.819974    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 18:29:54.819988    6117 main.go:141] libmachine: (force-systemd-env-761000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 18:29:56.820898    6117 client.go:171] duration metric: took 1m0.935861789s to LocalClient.Create
	I0926 18:29:58.821860    6117 start.go:128] duration metric: took 1m2.989835024s to createHost
	I0926 18:29:58.821892    6117 start.go:83] releasing machines lock for "force-systemd-env-761000", held for 1m2.989947417s
	W0926 18:29:58.822005    6117 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-761000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for aa:ee:1c:8f:8e:9e
	I0926 18:29:58.885222    6117 out.go:201] 
	W0926 18:29:58.906442    6117 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for aa:ee:1c:8f:8e:9e
	W0926 18:29:58.906456    6117 out.go:270] * 
	W0926 18:29:58.907173    6117 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:29:58.969442    6117 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-761000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
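
The stderr above shows the core failure: on every attempt the hyperkit driver re-reads /var/db/dhcpd_leases and scans the printed dhcp entries for the VM's MAC address (aa:ee:1c:8f:8e:9e), which never appears among the 18 leases, so after about a minute LocalClient.Create gives up with "IP address never found in dhcp leases file". The Go sketch below is a minimal, hypothetical illustration of that matching step, written against the entry format exactly as the log prints it; it is not the actual docker-machine-driver-hyperkit source, and findIPForMAC is a name invented for this example.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// One "dhcp entry" line as the log above prints it, e.g.
//   {Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,... Lease:0x66f75ab6}
// Capture groups: 1=Name, 2=IPAddress, 3=HWAddress.
var leaseRe = regexp.MustCompile(`\{Name:(\S+) IPAddress:(\S+) HWAddress:(\S+)`)

// findIPForMAC scans lease-file text for mac and returns the leased IP,
// or "" when no entry matches -- the condition the driver retries on above.
// Note: the lease file prints octets without leading zeros (e.g. ee:f:11:...),
// so a robust comparison would normalize both sides first; EqualFold here
// only covers case differences.
func findIPForMAC(leases, mac string) string {
	for _, line := range strings.Split(leases, "\n") {
		if m := leaseRe.FindStringSubmatch(line); m != nil && strings.EqualFold(m[3], mac) {
			return m[2]
		}
	}
	return ""
}

func main() {
	sample := `{Name:minikube IPAddress:192.169.0.19 HWAddress:96:aa:2d:b1:fe:37 ID:1,96:aa:2d:b1:fe:37 Lease:0x66f75ab6}
{Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}`
	if ip := findIPForMAC(sample, "aa:ee:1c:8f:8e:9e"); ip == "" {
		fmt.Println("could not find an IP address for aa:ee:1c:8f:8e:9e") // the error string seen above
	}
}
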
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-761000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-761000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (181.849326ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-env-761000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-761000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
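
Because the VM never obtained an IP, this follow-up probe had no control-plane endpoint to reach (exit status 50, DRV_CP_ENDPOINT above). The probe itself is a plain `docker info` Go-template query; as a point of reference, a minimal local equivalent — a sketch assuming only a docker CLI on PATH, not part of the test suite — would be:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query the test runs inside the node: ask the Docker daemon
	// which cgroup driver it uses ("systemd" or "cgroupfs").
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	fmt.Println("CgroupDriver:", strings.TrimSpace(string(out)))
}
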
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-26 18:29:59.26754 -0700 PDT m=+4562.001883626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-761000 -n force-systemd-env-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-761000 -n force-systemd-env-761000: exit status 7 (80.314005ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0926 18:29:59.345896    6214 status.go:386] failed to get driver ip: getting IP: IP address is not set
	E0926 18:29:59.345915    6214 status.go:119] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-761000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-env-761000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-761000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-761000: (5.265012627s)
--- FAIL: TestForceSystemdEnv (233.75s)

                                                
                                    
TestErrorSpam/setup (76.52s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-580000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p nospam-580000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 --driver=hyperkit : exit status 90 (1m16.510767593s)

                                                
                                                
-- stdout --
	* [nospam-580000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "nospam-580000" primary control-plane node in "nospam-580000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 27 00:29:30 nospam-580000 systemd[1]: Starting Docker Application Container Engine...
	Sep 27 00:29:30 nospam-580000 dockerd[508]: time="2024-09-27T00:29:30.387450813Z" level=info msg="Starting up"
	Sep 27 00:29:30 nospam-580000 dockerd[508]: time="2024-09-27T00:29:30.388089344Z" level=info msg="containerd not running, starting managed containerd"
	Sep 27 00:29:30 nospam-580000 dockerd[508]: time="2024-09-27T00:29:30.388627728Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=515
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.404667238Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.419896803Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.419918714Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.419955362Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.419965424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.420013036Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.420045451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.420171753Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.420206782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.420218572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.420225426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.420279293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.420467848Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.421992753Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.422009768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.422091019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.422124284Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.422196990Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.422237739Z" level=info msg="metadata content store policy set" policy=shared
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.424671057Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.424791614Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.424806285Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.424821281Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.424833690Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.424901145Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425113475Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425207453Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425241374Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425253422Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425262917Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425271152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425288828Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425301567Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425311084Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425325905Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425337892Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425346043Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425359833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425368896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425378485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425388720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425396893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425405107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425412363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425455435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425468241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425494652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425506365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425515337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425523786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425541676Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425557957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425566870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425574891Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425627094Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425644001Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425652046Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425659898Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425666894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425674671Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425683742Z" level=info msg="NRI interface is disabled by configuration."
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425810870Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425863735Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425893971Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425904667Z" level=info msg="containerd successfully booted in 0.021992s"
	Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.417063682Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.429397484Z" level=info msg="Loading containers: start."
	Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.512436278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.601408503Z" level=info msg="Loading containers: done."
	Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.625936307Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.626030206Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.626070619Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.626181104Z" level=info msg="Daemon has completed initialization"
	Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.651393444Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.651569505Z" level=info msg="API listen on [::]:2376"
	Sep 27 00:29:31 nospam-580000 systemd[1]: Started Docker Application Container Engine.
	Sep 27 00:29:32 nospam-580000 systemd[1]: Stopping Docker Application Container Engine...
	Sep 27 00:29:32 nospam-580000 dockerd[508]: time="2024-09-27T00:29:32.594928366Z" level=info msg="Processing signal 'terminated'"
	Sep 27 00:29:32 nospam-580000 dockerd[508]: time="2024-09-27T00:29:32.595959320Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 27 00:29:32 nospam-580000 dockerd[508]: time="2024-09-27T00:29:32.596074374Z" level=info msg="Daemon shutdown complete"
	Sep 27 00:29:32 nospam-580000 dockerd[508]: time="2024-09-27T00:29:32.596156307Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 27 00:29:32 nospam-580000 dockerd[508]: time="2024-09-27T00:29:32.596169880Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 27 00:29:33 nospam-580000 systemd[1]: docker.service: Deactivated successfully.
	Sep 27 00:29:33 nospam-580000 systemd[1]: Stopped Docker Application Container Engine.
	Sep 27 00:29:33 nospam-580000 systemd[1]: Starting Docker Application Container Engine...
	Sep 27 00:29:33 nospam-580000 dockerd[910]: time="2024-09-27T00:29:33.632906200Z" level=info msg="Starting up"
	Sep 27 00:30:33 nospam-580000 dockerd[910]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 27 00:30:33 nospam-580000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 27 00:30:33 nospam-580000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 27 00:30:33 nospam-580000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
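The capture above ends with the proximate failure: after minikube restarts docker.service, the new dockerd (pid 910) spends a full minute (00:29:33 to 00:30:33) trying to dial /run/containerd/containerd.sock and exits with "context deadline exceeded", so the unit never comes back up. Note that the first dockerd ran its own managed containerd on /var/run/docker/containerd/containerd.sock, while the restarted one waits on the system socket. A minimal triage sketch against the still-running VM (profile name taken from this run; these are stock minikube and systemd commands, not part of the test):

	# is a system containerd unit present and healthy inside the guest?
	minikube ssh -p nospam-580000 -- sudo systemctl status containerd
	# does the socket dockerd is waiting for actually exist?
	minikube ssh -p nospam-580000 -- ls -l /run/containerd/containerd.sock
	# full docker unit journal, same output the test collects
	minikube ssh -p nospam-580000 -- sudo journalctl --no-pager -u docker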
error_spam_test.go:83: "out/minikube-darwin-amd64 start -p nospam-580000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 --driver=hyperkit " failed: exit status 90
error_spam_test.go:96: unexpected stderr: "X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "Job for docker.service failed because the control process exited with error code."
error_spam_test.go:96: unexpected stderr: "See \"systemctl status docker.service\" and \"journalctl -xeu docker.service\" for details."
error_spam_test.go:96: unexpected stderr: "sudo journalctl --no-pager -u docker:"
error_spam_test.go:96: unexpected stderr: "-- stdout --"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 systemd[1]: Starting Docker Application Container Engine..."
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[508]: time=\"2024-09-27T00:29:30.387450813Z\" level=info msg=\"Starting up\""
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[508]: time=\"2024-09-27T00:29:30.388089344Z\" level=info msg=\"containerd not running, starting managed containerd\""
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[508]: time=\"2024-09-27T00:29:30.388627728Z\" level=info msg=\"started new containerd process\" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=515"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.404667238Z\" level=info msg=\"starting containerd\" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.419896803Z\" level=info msg=\"loading plugin \\\"io.containerd.event.v1.exchange\\\"...\" type=io.containerd.event.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.419918714Z\" level=info msg=\"loading plugin \\\"io.containerd.internal.v1.opt\\\"...\" type=io.containerd.internal.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.419955362Z\" level=info msg=\"loading plugin \\\"io.containerd.warning.v1.deprecations\\\"...\" type=io.containerd.warning.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.419965424Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.blockfile\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.420013036Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.blockfile\\\"...\" error=\"no scratch file generator: skip plugin\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.420045451Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.btrfs\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.420171753Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.btrfs\\\"...\" error=\"path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.420206782Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.devmapper\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.420218572Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.devmapper\\\"...\" error=\"devmapper not configured: skip plugin\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.420225426Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.native\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.420279293Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.overlayfs\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.420467848Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.aufs\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.421992753Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.aufs\\\"...\" error=\"aufs is not supported (modprobe aufs failed: exit status 1 \\\"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\\\n\\\"): skip plugin\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.422009768Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.zfs\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.422091019Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.zfs\\\"...\" error=\"path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.422124284Z\" level=info msg=\"loading plugin \\\"io.containerd.content.v1.content\\\"...\" type=io.containerd.content.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.422196990Z\" level=info msg=\"loading plugin \\\"io.containerd.metadata.v1.bolt\\\"...\" type=io.containerd.metadata.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.422237739Z\" level=info msg=\"metadata content store policy set\" policy=shared"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.424671057Z\" level=info msg=\"loading plugin \\\"io.containerd.gc.v1.scheduler\\\"...\" type=io.containerd.gc.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.424791614Z\" level=info msg=\"loading plugin \\\"io.containerd.differ.v1.walking\\\"...\" type=io.containerd.differ.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.424806285Z\" level=info msg=\"loading plugin \\\"io.containerd.lease.v1.manager\\\"...\" type=io.containerd.lease.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.424821281Z\" level=info msg=\"loading plugin \\\"io.containerd.streaming.v1.manager\\\"...\" type=io.containerd.streaming.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.424833690Z\" level=info msg=\"loading plugin \\\"io.containerd.runtime.v1.linux\\\"...\" type=io.containerd.runtime.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.424901145Z\" level=info msg=\"loading plugin \\\"io.containerd.monitor.v1.cgroups\\\"...\" type=io.containerd.monitor.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425113475Z\" level=info msg=\"loading plugin \\\"io.containerd.runtime.v2.task\\\"...\" type=io.containerd.runtime.v2"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425207453Z\" level=info msg=\"loading plugin \\\"io.containerd.runtime.v2.shim\\\"...\" type=io.containerd.runtime.v2"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425241374Z\" level=info msg=\"loading plugin \\\"io.containerd.sandbox.store.v1.local\\\"...\" type=io.containerd.sandbox.store.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425253422Z\" level=info msg=\"loading plugin \\\"io.containerd.sandbox.controller.v1.local\\\"...\" type=io.containerd.sandbox.controller.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425262917Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.containers-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425271152Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.content-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425288828Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.diff-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425301567Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.images-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425311084Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.introspection-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425325905Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.namespaces-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425337892Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.snapshots-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425346043Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.tasks-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425359833Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.containers\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425368896Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.content\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425378485Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.diff\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425388720Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.events\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425396893Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.images\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425405107Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.introspection\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425412363Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.leases\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425455435Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.namespaces\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425468241Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.sandbox-controllers\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425494652Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.sandboxes\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425506365Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.snapshots\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425515337Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.streaming\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425523786Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.tasks\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425541676Z\" level=info msg=\"loading plugin \\\"io.containerd.transfer.v1.local\\\"...\" type=io.containerd.transfer.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425557957Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.transfer\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425566870Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.version\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425574891Z\" level=info msg=\"loading plugin \\\"io.containerd.internal.v1.restart\\\"...\" type=io.containerd.internal.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425627094Z\" level=info msg=\"loading plugin \\\"io.containerd.tracing.processor.v1.otlp\\\"...\" type=io.containerd.tracing.processor.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425644001Z\" level=info msg=\"skip loading plugin \\\"io.containerd.tracing.processor.v1.otlp\\\"...\" error=\"skip plugin: tracing endpoint not configured\" type=io.containerd.tracing.processor.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425652046Z\" level=info msg=\"loading plugin \\\"io.containerd.internal.v1.tracing\\\"...\" type=io.containerd.internal.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425659898Z\" level=info msg=\"skip loading plugin \\\"io.containerd.internal.v1.tracing\\\"...\" error=\"skip plugin: tracing endpoint not configured\" type=io.containerd.internal.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425666894Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.healthcheck\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425674671Z\" level=info msg=\"loading plugin \\\"io.containerd.nri.v1.nri\\\"...\" type=io.containerd.nri.v1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425683742Z\" level=info msg=\"NRI interface is disabled by configuration.\""
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425810870Z\" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425863735Z\" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425893971Z\" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:30 nospam-580000 dockerd[515]: time=\"2024-09-27T00:29:30.425904667Z\" level=info msg=\"containerd successfully booted in 0.021992s\""
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:31 nospam-580000 dockerd[508]: time=\"2024-09-27T00:29:31.417063682Z\" level=info msg=\"[graphdriver] trying configured driver: overlay2\""
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:31 nospam-580000 dockerd[508]: time=\"2024-09-27T00:29:31.429397484Z\" level=info msg=\"Loading containers: start.\""
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:31 nospam-580000 dockerd[508]: time=\"2024-09-27T00:29:31.512436278Z\" level=warning msg=\"ip6tables is enabled, but cannot set up ip6tables chains\" error=\"failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\\nPerhaps ip6tables or your kernel needs to be upgraded.\\n (exit status 3)\""
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:31 nospam-580000 dockerd[508]: time=\"2024-09-27T00:29:31.601408503Z\" level=info msg=\"Loading containers: done.\""
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:31 nospam-580000 dockerd[508]: time=\"2024-09-27T00:29:31.625936307Z\" level=warning msg=\"WARNING: bridge-nf-call-iptables is disabled\""
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:31 nospam-580000 dockerd[508]: time=\"2024-09-27T00:29:31.626030206Z\" level=warning msg=\"WARNING: bridge-nf-call-ip6tables is disabled\""
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:31 nospam-580000 dockerd[508]: time=\"2024-09-27T00:29:31.626070619Z\" level=info msg=\"Docker daemon\" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:31 nospam-580000 dockerd[508]: time=\"2024-09-27T00:29:31.626181104Z\" level=info msg=\"Daemon has completed initialization\""
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:31 nospam-580000 dockerd[508]: time=\"2024-09-27T00:29:31.651393444Z\" level=info msg=\"API listen on /var/run/docker.sock\""
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:31 nospam-580000 dockerd[508]: time=\"2024-09-27T00:29:31.651569505Z\" level=info msg=\"API listen on [::]:2376\""
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:31 nospam-580000 systemd[1]: Started Docker Application Container Engine."
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:32 nospam-580000 systemd[1]: Stopping Docker Application Container Engine..."
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:32 nospam-580000 dockerd[508]: time=\"2024-09-27T00:29:32.594928366Z\" level=info msg=\"Processing signal 'terminated'\""
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:32 nospam-580000 dockerd[508]: time=\"2024-09-27T00:29:32.595959320Z\" level=info msg=\"stopping event stream following graceful shutdown\" error=\"<nil>\" module=libcontainerd namespace=moby"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:32 nospam-580000 dockerd[508]: time=\"2024-09-27T00:29:32.596074374Z\" level=info msg=\"Daemon shutdown complete\""
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:32 nospam-580000 dockerd[508]: time=\"2024-09-27T00:29:32.596156307Z\" level=info msg=\"stopping event stream following graceful shutdown\" error=\"context canceled\" module=libcontainerd namespace=plugins.moby"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:32 nospam-580000 dockerd[508]: time=\"2024-09-27T00:29:32.596169880Z\" level=info msg=\"stopping healthcheck following graceful shutdown\" module=libcontainerd"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:33 nospam-580000 systemd[1]: docker.service: Deactivated successfully."
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:33 nospam-580000 systemd[1]: Stopped Docker Application Container Engine."
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:33 nospam-580000 systemd[1]: Starting Docker Application Container Engine..."
error_spam_test.go:96: unexpected stderr: "Sep 27 00:29:33 nospam-580000 dockerd[910]: time=\"2024-09-27T00:29:33.632906200Z\" level=info msg=\"Starting up\""
error_spam_test.go:96: unexpected stderr: "Sep 27 00:30:33 nospam-580000 dockerd[910]: failed to start daemon: failed to dial \"/run/containerd/containerd.sock\": failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:30:33 nospam-580000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE"
error_spam_test.go:96: unexpected stderr: "Sep 27 00:30:33 nospam-580000 systemd[1]: docker.service: Failed with result 'exit-code'."
error_spam_test.go:96: unexpected stderr: "Sep 27 00:30:33 nospam-580000 systemd[1]: Failed to start Docker Application Container Engine."
error_spam_test.go:96: unexpected stderr: "-- /stdout --"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-580000] minikube v1.34.0 on Darwin 14.6.1
- MINIKUBE_LOCATION=19711
- KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the hyperkit driver based on user configuration
* Starting "nospam-580000" primary control-plane node in "nospam-580000" cluster
* Creating hyperkit VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...

error_spam_test.go:111: minikube stderr:
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

sudo journalctl --no-pager -u docker:
-- stdout --
Sep 27 00:29:30 nospam-580000 systemd[1]: Starting Docker Application Container Engine...
Sep 27 00:29:30 nospam-580000 dockerd[508]: time="2024-09-27T00:29:30.387450813Z" level=info msg="Starting up"
Sep 27 00:29:30 nospam-580000 dockerd[508]: time="2024-09-27T00:29:30.388089344Z" level=info msg="containerd not running, starting managed containerd"
Sep 27 00:29:30 nospam-580000 dockerd[508]: time="2024-09-27T00:29:30.388627728Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=515
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.404667238Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.419896803Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.419918714Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.419955362Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.419965424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.420013036Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.420045451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.420171753Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.420206782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.420218572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.420225426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.420279293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.420467848Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.421992753Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.422009768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.422091019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.422124284Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.422196990Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.422237739Z" level=info msg="metadata content store policy set" policy=shared
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.424671057Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.424791614Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.424806285Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.424821281Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.424833690Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.424901145Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425113475Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425207453Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425241374Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425253422Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425262917Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425271152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425288828Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425301567Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425311084Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425325905Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425337892Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425346043Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425359833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425368896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425378485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425388720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425396893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425405107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425412363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425455435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425468241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425494652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425506365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425515337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425523786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425541676Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425557957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425566870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425574891Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425627094Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425644001Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425652046Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425659898Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425666894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425674671Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425683742Z" level=info msg="NRI interface is disabled by configuration."
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425810870Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425863735Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425893971Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Sep 27 00:29:30 nospam-580000 dockerd[515]: time="2024-09-27T00:29:30.425904667Z" level=info msg="containerd successfully booted in 0.021992s"
Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.417063682Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.429397484Z" level=info msg="Loading containers: start."
Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.512436278Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.601408503Z" level=info msg="Loading containers: done."
Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.625936307Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.626030206Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.626070619Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.626181104Z" level=info msg="Daemon has completed initialization"
Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.651393444Z" level=info msg="API listen on /var/run/docker.sock"
Sep 27 00:29:31 nospam-580000 dockerd[508]: time="2024-09-27T00:29:31.651569505Z" level=info msg="API listen on [::]:2376"
Sep 27 00:29:31 nospam-580000 systemd[1]: Started Docker Application Container Engine.
Sep 27 00:29:32 nospam-580000 systemd[1]: Stopping Docker Application Container Engine...
Sep 27 00:29:32 nospam-580000 dockerd[508]: time="2024-09-27T00:29:32.594928366Z" level=info msg="Processing signal 'terminated'"
Sep 27 00:29:32 nospam-580000 dockerd[508]: time="2024-09-27T00:29:32.595959320Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Sep 27 00:29:32 nospam-580000 dockerd[508]: time="2024-09-27T00:29:32.596074374Z" level=info msg="Daemon shutdown complete"
Sep 27 00:29:32 nospam-580000 dockerd[508]: time="2024-09-27T00:29:32.596156307Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Sep 27 00:29:32 nospam-580000 dockerd[508]: time="2024-09-27T00:29:32.596169880Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Sep 27 00:29:33 nospam-580000 systemd[1]: docker.service: Deactivated successfully.
Sep 27 00:29:33 nospam-580000 systemd[1]: Stopped Docker Application Container Engine.
Sep 27 00:29:33 nospam-580000 systemd[1]: Starting Docker Application Container Engine...
Sep 27 00:29:33 nospam-580000 dockerd[910]: time="2024-09-27T00:29:33.632906200Z" level=info msg="Starting up"
Sep 27 00:30:33 nospam-580000 dockerd[910]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Sep 27 00:30:33 nospam-580000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Sep 27 00:30:33 nospam-580000 systemd[1]: docker.service: Failed with result 'exit-code'.
Sep 27 00:30:33 nospam-580000 systemd[1]: Failed to start Docker Application Container Engine.

-- /stdout --
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (76.52s)
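Before retrying, the advice box in the captured output can be followed literally; a short sketch (profile name taken from this run, and `minikube logs --file` is the exact invocation the message itself recommends):

	# bundle the full logs for attachment to a GitHub issue
	minikube logs -p nospam-580000 --file=logs.txt
	# then remove the half-started profile so the next run starts clean
	minikube delete -p nospam-580000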

TestMultiControlPlane/serial/RestartClusterKeepsNodes (103.61s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-476000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-476000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-476000 -v=7 --alsologtostderr: (27.088578168s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-476000 --wait=true -v=7 --alsologtostderr
E0926 17:48:14.414856    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:48:20.717641    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-476000 --wait=true -v=7 --alsologtostderr: exit status 90 (1m16.239338068s)

-- stdout --
	* [ha-476000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-476000" primary control-plane node in "ha-476000" cluster
	* Restarting existing hyperkit VM for "ha-476000" ...
	
	

-- /stdout --
** stderr ** 
	I0926 17:47:49.344504    4056 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:47:49.344779    4056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:47:49.344784    4056 out.go:358] Setting ErrFile to fd 2...
	I0926 17:47:49.344787    4056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:47:49.344947    4056 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 17:47:49.346385    4056 out.go:352] Setting JSON to false
	I0926 17:47:49.371304    4056 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2839,"bootTime":1727395230,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0926 17:47:49.371395    4056 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:47:49.393491    4056 out.go:177] * [ha-476000] minikube v1.34.0 on Darwin 14.6.1
	I0926 17:47:49.435524    4056 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:47:49.435648    4056 notify.go:220] Checking for updates...
	I0926 17:47:49.478441    4056 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:47:49.501308    4056 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0926 17:47:49.522159    4056 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:47:49.543435    4056 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 17:47:49.564398    4056 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:47:49.586085    4056 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:47:49.586277    4056 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:47:49.587040    4056 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:47:49.587118    4056 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:47:49.596520    4056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51927
	I0926 17:47:49.596885    4056 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:47:49.597287    4056 main.go:141] libmachine: Using API Version  1
	I0926 17:47:49.597296    4056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:47:49.597506    4056 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:47:49.597668    4056 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:47:49.626406    4056 out.go:177] * Using the hyperkit driver based on existing profile
	I0926 17:47:49.668181    4056 start.go:297] selected driver: hyperkit
	I0926 17:47:49.668210    4056 start.go:901] validating driver "hyperkit" against &{Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:47:49.668486    4056 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:47:49.668687    4056 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:47:49.668923    4056 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19711-1128/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0926 17:47:49.678647    4056 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0926 17:47:49.683849    4056 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:47:49.683873    4056 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0926 17:47:49.687210    4056 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:47:49.687252    4056 cni.go:84] Creating CNI manager for ""
	I0926 17:47:49.687295    4056 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0926 17:47:49.687366    4056 start.go:340] cluster config:
	{Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:47:49.687488    4056 iso.go:125] acquiring lock: {Name:mka8a9c5a237c1e4ae233281d2ff7965d13a843d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:47:49.729200    4056 out.go:177] * Starting "ha-476000" primary control-plane node in "ha-476000" cluster
	I0926 17:47:49.750148    4056 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:47:49.750224    4056 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0926 17:47:49.750253    4056 cache.go:56] Caching tarball of preloaded images
	I0926 17:47:49.750449    4056 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 17:47:49.750467    4056 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:47:49.750667    4056 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:47:49.751646    4056 start.go:360] acquireMachinesLock for ha-476000: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:47:49.751769    4056 start.go:364] duration metric: took 97.652µs to acquireMachinesLock for "ha-476000"
	I0926 17:47:49.751807    4056 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:47:49.751824    4056 fix.go:54] fixHost starting: 
	I0926 17:47:49.752274    4056 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:47:49.752301    4056 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:47:49.761545    4056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51929
	I0926 17:47:49.761903    4056 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:47:49.762235    4056 main.go:141] libmachine: Using API Version  1
	I0926 17:47:49.762263    4056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:47:49.762513    4056 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:47:49.762647    4056 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:47:49.762740    4056 main.go:141] libmachine: (ha-476000) Calling .GetState
	I0926 17:47:49.762826    4056 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:47:49.762892    4056 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 3501
	I0926 17:47:49.763838    4056 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid 3501 missing from process table
	I0926 17:47:49.763881    4056 fix.go:112] recreateIfNeeded on ha-476000: state=Stopped err=<nil>
	I0926 17:47:49.763896    4056 main.go:141] libmachine: (ha-476000) Calling .DriverName
	W0926 17:47:49.763975    4056 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:47:49.806414    4056 out.go:177] * Restarting existing hyperkit VM for "ha-476000" ...
	I0926 17:47:49.829263    4056 main.go:141] libmachine: (ha-476000) Calling .Start
	I0926 17:47:49.829533    4056 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:47:49.829566    4056 main.go:141] libmachine: (ha-476000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid
	I0926 17:47:49.831187    4056 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid 3501 missing from process table
	I0926 17:47:49.831199    4056 main.go:141] libmachine: (ha-476000) DBG | pid 3501 is in state "Stopped"
	I0926 17:47:49.831220    4056 main.go:141] libmachine: (ha-476000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid...
	I0926 17:47:49.831400    4056 main.go:141] libmachine: (ha-476000) DBG | Using UUID 99cfbb80-3e9d-4d4f-a72a-447af4e3b1db
	I0926 17:47:49.943073    4056 main.go:141] libmachine: (ha-476000) DBG | Generated MAC 96:a2:4a:f3:be:4a
	I0926 17:47:49.943098    4056 main.go:141] libmachine: (ha-476000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000
	I0926 17:47:49.943208    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:49 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"99cfbb80-3e9d-4d4f-a72a-447af4e3b1db", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc900)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:47:49.943237    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:49 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"99cfbb80-3e9d-4d4f-a72a-447af4e3b1db", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc900)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:47:49.943280    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:49 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "99cfbb80-3e9d-4d4f-a72a-447af4e3b1db", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/ha-476000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"}
	I0926 17:47:49.943342    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:49 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 99cfbb80-3e9d-4d4f-a72a-447af4e3b1db -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/ha-476000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"
	I0926 17:47:49.943355    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:49 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 17:47:49.944942    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:49 DEBUG: hyperkit: Pid is 4068
	I0926 17:47:49.945281    4056 main.go:141] libmachine: (ha-476000) DBG | Attempt 0
	I0926 17:47:49.945315    4056 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:47:49.945367    4056 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4068
	I0926 17:47:49.947187    4056 main.go:141] libmachine: (ha-476000) DBG | Searching for 96:a2:4a:f3:be:4a in /var/db/dhcpd_leases ...
	I0926 17:47:49.947252    4056 main.go:141] libmachine: (ha-476000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:47:49.947271    4056 main.go:141] libmachine: (ha-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 17:47:49.947281    4056 main.go:141] libmachine: (ha-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f751f8}
	I0926 17:47:49.947317    4056 main.go:141] libmachine: (ha-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f7515c}
	I0926 17:47:49.947333    4056 main.go:141] libmachine: (ha-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f750e1}
	I0926 17:47:49.947349    4056 main.go:141] libmachine: (ha-476000) DBG | Found match: 96:a2:4a:f3:be:4a
	I0926 17:47:49.947356    4056 main.go:141] libmachine: (ha-476000) DBG | IP: 192.169.0.5
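The lease scan above resolves the VM's generated MAC to an IP by reading the host's /var/db/dhcpd_leases. As a minimal standalone sketch of the same lookup, assuming the brace-delimited name=/ip_address=/hw_address= entry layout that the parsed dhcp entries above imply, with ip_address= preceding hw_address= in each block:

	# Sketch: print the IP the macOS vmnet DHCP server leased to a given MAC.
	mac="96:a2:4a:f3:be:4a"   # MAC taken from the log above
	awk -v mac="$mac" '
	  /ip_address=/ { split($0, kv, "="); ip = kv[2] }
	  /hw_address=/ { if (index($0, mac)) { print ip; exit } }
	' /var/db/dhcpd_leases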
	I0926 17:47:49.947363    4056 main.go:141] libmachine: (ha-476000) Calling .GetConfigRaw
	I0926 17:47:49.947996    4056 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:47:49.948240    4056 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:47:49.948679    4056 machine.go:93] provisionDockerMachine start ...
	I0926 17:47:49.948690    4056 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:47:49.948818    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:47:49.948932    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:47:49.949029    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:47:49.949131    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:47:49.949241    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:47:49.949423    4056 main.go:141] libmachine: Using SSH client type: native
	I0926 17:47:49.949615    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x72fed00] 0x73019e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:47:49.949622    4056 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 17:47:49.952791    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:49 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 17:47:50.006714    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:50 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 17:47:50.007436    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:47:50.007455    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:47:50.007463    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:47:50.007470    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:47:50.388608    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 17:47:50.388637    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 17:47:50.503254    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:47:50.503271    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:47:50.503282    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:47:50.503305    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:47:50.504161    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 17:47:50.504172    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 17:47:56.122495    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:56 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0926 17:47:56.122534    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:56 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0926 17:47:56.122545    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:56 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0926 17:47:56.147685    4056 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:47:56 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0926 17:48:01.030201    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 17:48:01.030215    4056 main.go:141] libmachine: (ha-476000) Calling .GetMachineName
	I0926 17:48:01.030397    4056 buildroot.go:166] provisioning hostname "ha-476000"
	I0926 17:48:01.030414    4056 main.go:141] libmachine: (ha-476000) Calling .GetMachineName
	I0926 17:48:01.030524    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:48:01.030615    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:48:01.030699    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:48:01.030804    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:48:01.030901    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:48:01.031032    4056 main.go:141] libmachine: Using SSH client type: native
	I0926 17:48:01.031180    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x72fed00] 0x73019e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:48:01.031188    4056 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-476000 && echo "ha-476000" | sudo tee /etc/hostname
	I0926 17:48:01.109137    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-476000
	
	I0926 17:48:01.109157    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:48:01.109289    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:48:01.109382    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:48:01.109470    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:48:01.109556    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:48:01.109704    4056 main.go:141] libmachine: Using SSH client type: native
	I0926 17:48:01.109858    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x72fed00] 0x73019e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:48:01.109869    4056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-476000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-476000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-476000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:48:01.182753    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 17:48:01.182775    4056 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 17:48:01.182794    4056 buildroot.go:174] setting up certificates
	I0926 17:48:01.182799    4056 provision.go:84] configureAuth start
	I0926 17:48:01.182805    4056 main.go:141] libmachine: (ha-476000) Calling .GetMachineName
	I0926 17:48:01.182979    4056 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:48:01.183133    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:48:01.183241    4056 provision.go:143] copyHostCerts
	I0926 17:48:01.183271    4056 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:48:01.183339    4056 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 17:48:01.183347    4056 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:48:01.183475    4056 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 17:48:01.183694    4056 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:48:01.183733    4056 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 17:48:01.183740    4056 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:48:01.183819    4056 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 17:48:01.183954    4056 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:48:01.183992    4056 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 17:48:01.183996    4056 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:48:01.184068    4056 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 17:48:01.184202    4056 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.ha-476000 san=[127.0.0.1 192.169.0.5 ha-476000 localhost minikube]
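provision.go generates a server certificate signed by the profile CA, carrying the organization and SAN list shown in the log line above. minikube does this in Go; a hedged openssl equivalent of the same parameters (file names illustrative, not the paths minikube uses):

	# Sketch only: mirror the logged server-cert parameters with openssl.
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr -subj "/O=jenkins.ha-476000"
	openssl x509 -req -in server.csr -days 365 \
	  -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.169.0.5,DNS:ha-476000,DNS:localhost,DNS:minikube')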
	I0926 17:48:01.306756    4056 provision.go:177] copyRemoteCerts
	I0926 17:48:01.306830    4056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 17:48:01.306846    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:48:01.306998    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:48:01.307095    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:48:01.307184    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:48:01.307305    4056 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:48:01.346678    4056 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 17:48:01.346741    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 17:48:01.367638    4056 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 17:48:01.367708    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 17:48:01.387544    4056 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 17:48:01.387608    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0926 17:48:01.406824    4056 provision.go:87] duration metric: took 224.008152ms to configureAuth
	I0926 17:48:01.406840    4056 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:48:01.407020    4056 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:48:01.407034    4056 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:48:01.407161    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:48:01.407246    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:48:01.407337    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:48:01.407427    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:48:01.407516    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:48:01.407657    4056 main.go:141] libmachine: Using SSH client type: native
	I0926 17:48:01.407792    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x72fed00] 0x73019e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:48:01.407800    4056 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:48:01.475504    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:48:01.475519    4056 buildroot.go:70] root file system type: tmpfs
	I0926 17:48:01.475602    4056 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:48:01.475615    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:48:01.475758    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:48:01.475864    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:48:01.475949    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:48:01.476046    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:48:01.476192    4056 main.go:141] libmachine: Using SSH client type: native
	I0926 17:48:01.476337    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x72fed00] 0x73019e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:48:01.476380    4056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:48:01.553055    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 17:48:01.553085    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:48:01.553213    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:48:01.553310    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:48:01.553391    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:48:01.553478    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:48:01.553626    4056 main.go:141] libmachine: Using SSH client type: native
	I0926 17:48:01.553758    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x72fed00] 0x73019e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:48:01.553771    4056 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:48:03.251950    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 17:48:03.251966    4056 machine.go:96] duration metric: took 13.303229152s to provisionDockerMachine
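The docker.service swap above is idempotent: the rendered unit is written to docker.service.new and only moved into place (followed by daemon-reload, enable, and restart) when diff reports a difference; here the diff error simply means no unit existed yet. A quick lint one could run on the rendered file before the swap, assuming systemd-analyze is available in the guest (minikube does not run this):

	# Optional sanity check: flags duplicate directives and unknown keys,
	# e.g. a second ExecStart= without the empty ExecStart= reset above it.
	sudo systemd-analyze verify /lib/systemd/system/docker.service.new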
	I0926 17:48:03.251977    4056 start.go:293] postStartSetup for "ha-476000" (driver="hyperkit")
	I0926 17:48:03.251984    4056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:48:03.251994    4056 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:48:03.252206    4056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:48:03.252224    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:48:03.252324    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:48:03.252411    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:48:03.252501    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:48:03.252608    4056 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:48:03.295555    4056 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:48:03.299358    4056 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 17:48:03.299371    4056 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 17:48:03.299488    4056 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 17:48:03.299681    4056 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 17:48:03.299687    4056 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 17:48:03.299892    4056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 17:48:03.308751    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:48:03.340442    4056 start.go:296] duration metric: took 88.455537ms for postStartSetup
	I0926 17:48:03.340464    4056 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:48:03.340646    4056 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0926 17:48:03.340659    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:48:03.340751    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:48:03.340833    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:48:03.340922    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:48:03.340991    4056 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:48:03.381594    4056 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0926 17:48:03.381662    4056 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0926 17:48:03.435599    4056 fix.go:56] duration metric: took 13.683726999s for fixHost
	I0926 17:48:03.435622    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:48:03.435757    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:48:03.435868    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:48:03.435977    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:48:03.436062    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:48:03.436231    4056 main.go:141] libmachine: Using SSH client type: native
	I0926 17:48:03.436416    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x72fed00] 0x73019e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:48:03.436425    4056 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:48:03.503506    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398083.607272932
	
	I0926 17:48:03.503518    4056 fix.go:216] guest clock: 1727398083.607272932
	I0926 17:48:03.503523    4056 fix.go:229] Guest: 2024-09-26 17:48:03.607272932 -0700 PDT Remote: 2024-09-26 17:48:03.435612 -0700 PDT m=+14.126437942 (delta=171.660932ms)
	I0926 17:48:03.503540    4056 fix.go:200] guest clock delta is within tolerance: 171.660932ms
	I0926 17:48:03.503544    4056 start.go:83] releasing machines lock for "ha-476000", held for 13.751713259s
	I0926 17:48:03.503561    4056 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:48:03.503697    4056 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:48:03.503806    4056 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:48:03.504113    4056 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:48:03.504213    4056 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:48:03.504298    4056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:48:03.504328    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:48:03.504336    4056 ssh_runner.go:195] Run: cat /version.json
	I0926 17:48:03.504347    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:48:03.504440    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:48:03.504459    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:48:03.504533    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:48:03.504577    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:48:03.504622    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:48:03.504654    4056 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:48:03.504709    4056 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:48:03.504735    4056 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:48:03.586887    4056 ssh_runner.go:195] Run: systemctl --version
	I0926 17:48:03.592001    4056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 17:48:03.596318    4056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:48:03.596370    4056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 17:48:03.608842    4056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 17:48:03.608853    4056 start.go:495] detecting cgroup driver to use...
	I0926 17:48:03.608964    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:48:03.626338    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 17:48:03.635212    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:48:03.644189    4056 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:48:03.644241    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:48:03.653029    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:48:03.661942    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:48:03.670890    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:48:03.679741    4056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:48:03.688813    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:48:03.697626    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:48:03.706499    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 17:48:03.715297    4056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:48:03.723319    4056 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 17:48:03.723361    4056 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 17:48:03.732381    4056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
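The two commands above (modprobe br_netfilter after the sysctl probe failed, then echoing into /proc/sys) apply the bridge-netfilter and forwarding settings for the current boot only. A persistent variant, under the assumption that the guest image keeps /etc/sysctl.d across restarts (minikube's buildroot image may not):

	# Sketch: persist the same two kernel settings via sysctl.d.
	sudo modprobe br_netfilter
	printf '%s\n' 'net.bridge.bridge-nf-call-iptables = 1' \
	              'net.ipv4.ip_forward = 1' |
	  sudo tee /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system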
	I0926 17:48:03.740520    4056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:48:03.846103    4056 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 17:48:03.863241    4056 start.go:495] detecting cgroup driver to use...
	I0926 17:48:03.863329    4056 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:48:03.882429    4056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:48:03.893015    4056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:48:03.913761    4056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:48:03.924859    4056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:48:03.935186    4056 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 17:48:03.963820    4056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:48:03.974586    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:48:03.989764    4056 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:48:03.992627    4056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:48:03.999689    4056 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 17:48:04.013274    4056 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:48:04.108203    4056 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:48:04.223421    4056 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:48:04.223488    4056 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 17:48:04.237468    4056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:48:04.333075    4056 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:49:05.364703    4056 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.031384956s)
	I0926 17:49:05.364787    4056 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0926 17:49:05.400617    4056 out.go:201] 
	W0926 17:49:05.422103    4056 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 27 00:48:01 ha-476000 systemd[1]: Starting Docker Application Container Engine...
	Sep 27 00:48:01 ha-476000 dockerd[484]: time="2024-09-27T00:48:01.969363722Z" level=info msg="Starting up"
	Sep 27 00:48:01 ha-476000 dockerd[484]: time="2024-09-27T00:48:01.969815899Z" level=info msg="containerd not running, starting managed containerd"
	Sep 27 00:48:01 ha-476000 dockerd[484]: time="2024-09-27T00:48:01.970460104Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=490
	Sep 27 00:48:01 ha-476000 dockerd[490]: time="2024-09-27T00:48:01.987544031Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.003331080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.003375112Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.003456379Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.003470278Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.003638487Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.003675080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.003845462Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.003881254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.003893694Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.003901017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.004042604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.004974304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.006603346Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.006638491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.006780174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.006846040Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.007005481Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.007051359Z" level=info msg="metadata content store policy set" policy=shared
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.010465347Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.010511299Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.010524445Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.010534970Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.010579219Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.010626241Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.010830576Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.010901488Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.010935043Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.010945943Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.010954718Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.010962836Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.010971196Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.010982947Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.010994074Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011002716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011010624Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011017596Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011029942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011040242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011048283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011056273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011064464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011072667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011080228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011088172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011096174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011105079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011112562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011119866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011128298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011137813Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011157920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011166892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011173869Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011222439Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011236365Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011244020Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011298609Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011306917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011315750Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011322327Z" level=info msg="NRI interface is disabled by configuration."
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011447699Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011522805Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011593920Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 27 00:48:02 ha-476000 dockerd[490]: time="2024-09-27T00:48:02.011627379Z" level=info msg="containerd successfully booted in 0.024821s"
	Sep 27 00:48:02 ha-476000 dockerd[484]: time="2024-09-27T00:48:02.998717914Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 27 00:48:03 ha-476000 dockerd[484]: time="2024-09-27T00:48:03.051888143Z" level=info msg="Loading containers: start."
	Sep 27 00:48:03 ha-476000 dockerd[484]: time="2024-09-27T00:48:03.206935043Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 27 00:48:03 ha-476000 dockerd[484]: time="2024-09-27T00:48:03.274692716Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 27 00:48:03 ha-476000 dockerd[484]: time="2024-09-27T00:48:03.324015300Z" level=warning msg="error locating sandbox id 05dc1618537a8b1b0b5482b6ccfe500de78d1ad406eccac9be31f76e76e60cdb: sandbox 05dc1618537a8b1b0b5482b6ccfe500de78d1ad406eccac9be31f76e76e60cdb not found"
	Sep 27 00:48:03 ha-476000 dockerd[484]: time="2024-09-27T00:48:03.324204156Z" level=info msg="Loading containers: done."
	Sep 27 00:48:03 ha-476000 dockerd[484]: time="2024-09-27T00:48:03.330843116Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Sep 27 00:48:03 ha-476000 dockerd[484]: time="2024-09-27T00:48:03.330902314Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Sep 27 00:48:03 ha-476000 dockerd[484]: time="2024-09-27T00:48:03.330937080Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Sep 27 00:48:03 ha-476000 dockerd[484]: time="2024-09-27T00:48:03.331323474Z" level=info msg="Daemon has completed initialization"
	Sep 27 00:48:03 ha-476000 dockerd[484]: time="2024-09-27T00:48:03.352691801Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 27 00:48:03 ha-476000 systemd[1]: Started Docker Application Container Engine.
	Sep 27 00:48:03 ha-476000 dockerd[484]: time="2024-09-27T00:48:03.353410103Z" level=info msg="API listen on [::]:2376"
	Sep 27 00:48:04 ha-476000 dockerd[484]: time="2024-09-27T00:48:04.449072587Z" level=info msg="Processing signal 'terminated'"
	Sep 27 00:48:04 ha-476000 dockerd[484]: time="2024-09-27T00:48:04.449929646Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 27 00:48:04 ha-476000 dockerd[484]: time="2024-09-27T00:48:04.449992245Z" level=info msg="Daemon shutdown complete"
	Sep 27 00:48:04 ha-476000 dockerd[484]: time="2024-09-27T00:48:04.450023112Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 27 00:48:04 ha-476000 dockerd[484]: time="2024-09-27T00:48:04.450056296Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 27 00:48:04 ha-476000 systemd[1]: Stopping Docker Application Container Engine...
	Sep 27 00:48:05 ha-476000 systemd[1]: docker.service: Deactivated successfully.
	Sep 27 00:48:05 ha-476000 systemd[1]: Stopped Docker Application Container Engine.
	Sep 27 00:48:05 ha-476000 systemd[1]: Starting Docker Application Container Engine...
	Sep 27 00:48:05 ha-476000 dockerd[1170]: time="2024-09-27T00:48:05.491847231Z" level=info msg="Starting up"
	Sep 27 00:49:05 ha-476000 dockerd[1170]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 27 00:49:05 ha-476000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 27 00:49:05 ha-476000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 27 00:49:05 ha-476000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	[journalctl output omitted: identical, line for line, to the docker.service log reproduced above]
	
	-- /stdout --
	W0926 17:49:05.422224    4056 out.go:270] * 
	* 
	W0926 17:49:05.423151    4056 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:49:05.485306    4056 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p ha-476000 -v=7 --alsologtostderr" : exit status 90
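
Note: the journalctl dump above contains the actual root cause. The first dockerd (pid 484) launches its own managed containerd on /var/run/docker/containerd/containerd.sock and comes up cleanly, but the restarted dockerd (pid 1170) instead dials /run/containerd/containerd.sock, the system containerd socket, and gives up when the 60s dial deadline expires (00:48:05 "Starting up" -> 00:49:05 "context deadline exceeded"). A triage sketch for this symptom, using standard systemd/containerd tooling inside the VM (via `minikube ssh -p ha-476000`; these commands were not part of this run):
	$ sudo systemctl status containerd --no-pager     # is a system containerd unit present and running?
	$ sudo journalctl -u containerd --no-pager -n 50  # if present, why it did not come up
	$ ls -l /run/containerd/containerd.sock           # does the socket dockerd is dialing exist at all?
	$ sudo ctr --address /run/containerd/containerd.sock version  # can a client actually reach it?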
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-476000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-476000 -n ha-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-476000 -n ha-476000: exit status 6 (153.920778ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0926 17:49:05.740446    4084 status.go:451] kubeconfig endpoint: get endpoint: "ha-476000" does not appear in /Users/jenkins/minikube-integration/19711-1128/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-476000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (103.61s)
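
Note: the recurring kubeconfig endpoint error above ("ha-476000" does not appear in .../kubeconfig) means the profile's cluster entry is missing from the kubeconfig file, which is why status reports kubeconfig: Misconfigured even while the host VM is Running. If the cluster itself were healthy, the repair the warning points at would look like this (a sketch; not executed in this run):
	$ out/minikube-darwin-amd64 -p ha-476000 update-context   # rewrite this profile's kubeconfig entry
	$ kubectl config get-contexts                             # confirm the ha-476000 context is back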

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (0.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-476000 node delete m03 -v=7 --alsologtostderr: exit status 83 (167.214763ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-476000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-476000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 17:49:05.808680    4089 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:49:05.808991    4089 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:49:05.808997    4089 out.go:358] Setting ErrFile to fd 2...
	I0926 17:49:05.809001    4089 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:49:05.809174    4089 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 17:49:05.809527    4089 mustload.go:65] Loading cluster: ha-476000
	I0926 17:49:05.809873    4089 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:49:05.810235    4089 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:49:05.810277    4089 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:49:05.818545    4089 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51959
	I0926 17:49:05.818981    4089 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:49:05.819403    4089 main.go:141] libmachine: Using API Version  1
	I0926 17:49:05.819411    4089 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:49:05.819622    4089 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:49:05.819730    4089 main.go:141] libmachine: (ha-476000) Calling .GetState
	I0926 17:49:05.819838    4089 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:49:05.819897    4089 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4068
	I0926 17:49:05.820845    4089 host.go:66] Checking if "ha-476000" exists ...
	I0926 17:49:05.821098    4089 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:49:05.821117    4089 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:49:05.829485    4089 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51961
	I0926 17:49:05.829832    4089 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:49:05.830172    4089 main.go:141] libmachine: Using API Version  1
	I0926 17:49:05.830185    4089 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:49:05.830456    4089 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:49:05.830588    4089 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:49:05.830958    4089 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:49:05.830988    4089 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:49:05.839187    4089 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51963
	I0926 17:49:05.839557    4089 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:49:05.839874    4089 main.go:141] libmachine: Using API Version  1
	I0926 17:49:05.839887    4089 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:49:05.840093    4089 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:49:05.840218    4089 main.go:141] libmachine: (ha-476000-m02) Calling .GetState
	I0926 17:49:05.840303    4089 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:49:05.840378    4089 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid from json: 4002
	I0926 17:49:05.841288    4089 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid 4002 missing from process table
	W0926 17:49:05.841508    4089 out.go:270] ! The control-plane node ha-476000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-476000-m02 host is not running (will try others): state=Stopped
	I0926 17:49:05.841844    4089 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:49:05.841871    4089 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:49:05.850244    4089 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51965
	I0926 17:49:05.850623    4089 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:49:05.850955    4089 main.go:141] libmachine: Using API Version  1
	I0926 17:49:05.850964    4089 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:49:05.851196    4089 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:49:05.851324    4089 main.go:141] libmachine: (ha-476000-m03) Calling .GetState
	I0926 17:49:05.851401    4089 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:49:05.851473    4089 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid from json: 3537
	I0926 17:49:05.852383    4089 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid 3537 missing from process table
	I0926 17:49:05.874130    4089 out.go:177] * The control-plane node ha-476000-m03 host is not running: state=Stopped
	I0926 17:49:05.897009    4089 out.go:177]   To start a cluster, run: "minikube start -p ha-476000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-amd64 -p ha-476000 node delete m03 -v=7 --alsologtostderr": exit status 83
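
Note: exit status 83 here is minikube declining the operation rather than crashing. node delete needs a running control-plane host to talk to; the loader tries ha-476000-m02 ("will try others") and then ha-476000-m03 and finds both Stopped, with the driver debug lines noting their hyperkit pids are missing from the process table. A plausible manual recovery order, sketched from commands this suite already uses (not executed here):
	$ out/minikube-darwin-amd64 start -p ha-476000 --wait=true   # bring the control plane back first
	$ out/minikube-darwin-amd64 -p ha-476000 node delete m03     # then retry the delete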
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-476000 status -v=7 --alsologtostderr: exit status 7 (182.880855ms)

                                                
                                                
-- stdout --
	ha-476000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
	ha-476000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-476000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-476000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 17:49:05.975880    4095 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:49:05.976064    4095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:49:05.976070    4095 out.go:358] Setting ErrFile to fd 2...
	I0926 17:49:05.976073    4095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:49:05.976244    4095 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 17:49:05.976422    4095 out.go:352] Setting JSON to false
	I0926 17:49:05.976447    4095 mustload.go:65] Loading cluster: ha-476000
	I0926 17:49:05.976498    4095 notify.go:220] Checking for updates...
	I0926 17:49:05.976800    4095 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:49:05.976821    4095 status.go:174] checking status of ha-476000 ...
	I0926 17:49:05.977239    4095 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:49:05.977290    4095 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:49:05.986063    4095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51968
	I0926 17:49:05.986541    4095 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:49:05.986958    4095 main.go:141] libmachine: Using API Version  1
	I0926 17:49:05.986968    4095 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:49:05.987212    4095 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:49:05.987356    4095 main.go:141] libmachine: (ha-476000) Calling .GetState
	I0926 17:49:05.987463    4095 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:49:05.987524    4095 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4068
	I0926 17:49:05.988465    4095 status.go:364] ha-476000 host status = "Running" (err=<nil>)
	I0926 17:49:05.988483    4095 host.go:66] Checking if "ha-476000" exists ...
	I0926 17:49:05.988735    4095 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:49:05.988756    4095 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:49:05.996965    4095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51970
	I0926 17:49:05.997311    4095 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:49:05.997696    4095 main.go:141] libmachine: Using API Version  1
	I0926 17:49:05.997709    4095 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:49:05.997923    4095 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:49:05.998061    4095 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:49:05.998155    4095 host.go:66] Checking if "ha-476000" exists ...
	I0926 17:49:05.998417    4095 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:49:05.998441    4095 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:49:06.006744    4095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51972
	I0926 17:49:06.007068    4095 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:49:06.007384    4095 main.go:141] libmachine: Using API Version  1
	I0926 17:49:06.007392    4095 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:49:06.007595    4095 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:49:06.007697    4095 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:49:06.007844    4095 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 17:49:06.007861    4095 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:49:06.007944    4095 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:49:06.008017    4095 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:49:06.008106    4095 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:49:06.008193    4095 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:49:06.043981    4095 ssh_runner.go:195] Run: systemctl --version
	I0926 17:49:06.048163    4095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0926 17:49:06.058917    4095 status.go:451] kubeconfig endpoint: get endpoint: "ha-476000" does not appear in /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:49:06.058941    4095 api_server.go:166] Checking apiserver status ...
	I0926 17:49:06.058985    4095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0926 17:49:06.068603    4095 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0926 17:49:06.068613    4095 status.go:456] ha-476000 apiserver status = Stopped (err=<nil>)
	I0926 17:49:06.068624    4095 status.go:176] ha-476000 status: &{Name:ha-476000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 17:49:06.068637    4095 status.go:174] checking status of ha-476000-m02 ...
	I0926 17:49:06.068919    4095 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:49:06.068941    4095 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:49:06.077480    4095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51975
	I0926 17:49:06.077837    4095 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:49:06.078165    4095 main.go:141] libmachine: Using API Version  1
	I0926 17:49:06.078189    4095 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:49:06.078415    4095 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:49:06.078538    4095 main.go:141] libmachine: (ha-476000-m02) Calling .GetState
	I0926 17:49:06.078669    4095 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:49:06.078691    4095 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid from json: 4002
	I0926 17:49:06.079653    4095 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid 4002 missing from process table
	I0926 17:49:06.079678    4095 status.go:364] ha-476000-m02 host status = "Stopped" (err=<nil>)
	I0926 17:49:06.079686    4095 status.go:377] host is not running, skipping remaining checks
	I0926 17:49:06.079691    4095 status.go:176] ha-476000-m02 status: &{Name:ha-476000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 17:49:06.079707    4095 status.go:174] checking status of ha-476000-m03 ...
	I0926 17:49:06.079975    4095 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:49:06.080002    4095 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:49:06.088686    4095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51977
	I0926 17:49:06.089055    4095 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:49:06.089398    4095 main.go:141] libmachine: Using API Version  1
	I0926 17:49:06.089409    4095 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:49:06.089636    4095 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:49:06.089757    4095 main.go:141] libmachine: (ha-476000-m03) Calling .GetState
	I0926 17:49:06.089846    4095 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:49:06.089922    4095 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid from json: 3537
	I0926 17:49:06.090852    4095 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid 3537 missing from process table
	I0926 17:49:06.090874    4095 status.go:364] ha-476000-m03 host status = "Stopped" (err=<nil>)
	I0926 17:49:06.090882    4095 status.go:377] host is not running, skipping remaining checks
	I0926 17:49:06.090887    4095 status.go:176] ha-476000-m03 status: &{Name:ha-476000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 17:49:06.090898    4095 status.go:174] checking status of ha-476000-m04 ...
	I0926 17:49:06.091171    4095 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:49:06.091196    4095 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:49:06.099512    4095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51979
	I0926 17:49:06.099839    4095 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:49:06.100160    4095 main.go:141] libmachine: Using API Version  1
	I0926 17:49:06.100168    4095 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:49:06.100383    4095 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:49:06.100513    4095 main.go:141] libmachine: (ha-476000-m04) Calling .GetState
	I0926 17:49:06.100598    4095 main.go:141] libmachine: (ha-476000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:49:06.100698    4095 main.go:141] libmachine: (ha-476000-m04) DBG | hyperkit pid from json: 3636
	I0926 17:49:06.101603    4095 main.go:141] libmachine: (ha-476000-m04) DBG | hyperkit pid 3636 missing from process table
	I0926 17:49:06.101653    4095 status.go:364] ha-476000-m04 host status = "Stopped" (err=<nil>)
	I0926 17:49:06.101663    4095 status.go:377] host is not running, skipping remaining checks
	I0926 17:49:06.101667    4095 status.go:176] ha-476000-m04 status: &{Name:ha-476000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-476000 status -v=7 --alsologtostderr" : exit status 7
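
Note: minikube status composes its exit code from bit flags (assuming the layout in cmd/minikube/cmd/status.go is still 1 = host, 2 = apiserver, 4 = kubelet), so exit status 7 means every check failed for at least one node, consistent with the Stopped/Misconfigured table above. A quick way to surface the code when reproducing by hand (sketch):
	$ out/minikube-darwin-amd64 -p ha-476000 status -v=7; echo "exit=$?"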
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-476000 -n ha-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-476000 -n ha-476000: exit status 6 (150.348154ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0926 17:49:06.242842    4103 status.go:451] kubeconfig endpoint: get endpoint: "ha-476000" does not appear in /Users/jenkins/minikube-integration/19711-1128/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-476000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.50s)
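
Note: all of m02/m03/m04 read Stopped because the hyperkit processes recorded in each machine's config are gone ("hyperkit pid ... missing from process table" in the driver debug output above). On the macOS host this can be cross-checked directly (a sketch; pgrep/ps flags per the BSD userland on macOS):
	$ pgrep -fl docker-machine-driver-hyperkit   # any surviving driver plugins
	$ pgrep -fl hyperkit                         # any surviving VM processes
	$ ps -p 4002,3537,3636                       # the pids the driver expected for m02/m03/m04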

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:413: expected profile "ha-476000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-476000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-476000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-476000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-476000 -n ha-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-476000 -n ha-476000: exit status 6 (153.255921ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0926 17:49:06.624189    4117 status.go:451] kubeconfig endpoint: get endpoint: "ha-476000" does not appear in /Users/jenkins/minikube-integration/19711-1128/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-476000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (233.78s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 stop -v=7 --alsologtostderr
E0926 17:49:37.500798    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:50:36.847447    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:51:04.560457    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-476000 stop -v=7 --alsologtostderr: (3m53.598935597s)
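The cert_rotation errors interleaved above appear to come from client-go certificate watchers still tracking the client certs of profiles deleted earlier in the run (addons-433000, functional-748000); they are noise relative to this test. A minimal sketch for listing kubeconfig users whose client certificates no longer exist on disk (the KUBECONFIG path is the one from this run; the awk-based YAML scrape is an illustration, not part of the harness):

	export KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	kubectl config view --raw -o yaml \
	  | awk '/client-certificate:/ {print $2}' \
	  | while read -r crt; do [ -e "$crt" ] || echo "missing: $crt"; done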
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-476000 status -v=7 --alsologtostderr: exit status 7 (107.871148ms)

-- stdout --
	ha-476000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-476000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-476000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-476000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0926 17:53:00.291243    4169 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:53:00.291510    4169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:53:00.291516    4169 out.go:358] Setting ErrFile to fd 2...
	I0926 17:53:00.291520    4169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:53:00.291684    4169 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 17:53:00.291874    4169 out.go:352] Setting JSON to false
	I0926 17:53:00.291898    4169 mustload.go:65] Loading cluster: ha-476000
	I0926 17:53:00.291941    4169 notify.go:220] Checking for updates...
	I0926 17:53:00.292260    4169 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:00.292281    4169 status.go:174] checking status of ha-476000 ...
	I0926 17:53:00.292682    4169 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.292721    4169 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:00.301690    4169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52024
	I0926 17:53:00.302045    4169 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:00.302471    4169 main.go:141] libmachine: Using API Version  1
	I0926 17:53:00.302482    4169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:00.302762    4169 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:00.302901    4169 main.go:141] libmachine: (ha-476000) Calling .GetState
	I0926 17:53:00.302996    4169 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:00.303062    4169 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4068
	I0926 17:53:00.304004    4169 status.go:364] ha-476000 host status = "Stopped" (err=<nil>)
	I0926 17:53:00.304014    4169 status.go:377] host is not running, skipping remaining checks
	I0926 17:53:00.304018    4169 status.go:176] ha-476000 status: &{Name:ha-476000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 17:53:00.304037    4169 status.go:174] checking status of ha-476000-m02 ...
	I0926 17:53:00.304039    4169 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid 4068 missing from process table
	I0926 17:53:00.304322    4169 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.304344    4169 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:00.312644    4169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52027
	I0926 17:53:00.312946    4169 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:00.313252    4169 main.go:141] libmachine: Using API Version  1
	I0926 17:53:00.313260    4169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:00.313464    4169 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:00.313569    4169 main.go:141] libmachine: (ha-476000-m02) Calling .GetState
	I0926 17:53:00.313663    4169 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:00.313725    4169 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid from json: 4002
	I0926 17:53:00.314640    4169 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid 4002 missing from process table
	I0926 17:53:00.314661    4169 status.go:364] ha-476000-m02 host status = "Stopped" (err=<nil>)
	I0926 17:53:00.314669    4169 status.go:377] host is not running, skipping remaining checks
	I0926 17:53:00.314672    4169 status.go:176] ha-476000-m02 status: &{Name:ha-476000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 17:53:00.314681    4169 status.go:174] checking status of ha-476000-m03 ...
	I0926 17:53:00.314937    4169 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.314961    4169 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:00.323192    4169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52029
	I0926 17:53:00.323515    4169 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:00.323843    4169 main.go:141] libmachine: Using API Version  1
	I0926 17:53:00.323859    4169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:00.324063    4169 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:00.324215    4169 main.go:141] libmachine: (ha-476000-m03) Calling .GetState
	I0926 17:53:00.324311    4169 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:00.324382    4169 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid from json: 3537
	I0926 17:53:00.330929    4169 status.go:364] ha-476000-m03 host status = "Stopped" (err=<nil>)
	I0926 17:53:00.330938    4169 status.go:377] host is not running, skipping remaining checks
	I0926 17:53:00.330941    4169 status.go:176] ha-476000-m03 status: &{Name:ha-476000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 17:53:00.330951    4169 status.go:174] checking status of ha-476000-m04 ...
	I0926 17:53:00.330980    4169 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid 3537 missing from process table
	I0926 17:53:00.331231    4169 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.331254    4169 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:00.339543    4169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52031
	I0926 17:53:00.339882    4169 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:00.340275    4169 main.go:141] libmachine: Using API Version  1
	I0926 17:53:00.340296    4169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:00.340530    4169 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:00.340645    4169 main.go:141] libmachine: (ha-476000-m04) Calling .GetState
	I0926 17:53:00.340742    4169 main.go:141] libmachine: (ha-476000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:00.340821    4169 main.go:141] libmachine: (ha-476000-m04) DBG | hyperkit pid from json: 3636
	I0926 17:53:00.341738    4169 main.go:141] libmachine: (ha-476000-m04) DBG | hyperkit pid 3636 missing from process table
	I0926 17:53:00.341778    4169 status.go:364] ha-476000-m04 host status = "Stopped" (err=<nil>)
	I0926 17:53:00.341787    4169 status.go:377] host is not running, skipping remaining checks
	I0926 17:53:00.341792    4169 status.go:176] ha-476000-m04 status: &{Name:ha-476000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-amd64 -p ha-476000 status -v=7 --alsologtostderr": ha-476000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-476000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-476000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-476000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-amd64 -p ha-476000 status -v=7 --alsologtostderr": ha-476000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-476000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-476000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-476000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-amd64 -p ha-476000 status -v=7 --alsologtostderr": ha-476000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-476000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-476000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-476000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-476000 -n ha-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-476000 -n ha-476000: exit status 7 (69.299406ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-476000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (233.78s)
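`minikube status` encodes component state in its exit code: the status command sets one bit each for the host, the cluster, and Kubernetes not running, so the exit status 7 above means all three are down, while the earlier exit status 6 (host still "Running", kubeconfig stale) left the host bit clear. A minimal decoding sketch, assuming that bit layout:

	out/minikube-darwin-amd64 -p ha-476000 status; rc=$?
	(( rc & 1 )) && echo "host not running"
	(( rc & 2 )) && echo "cluster not running"
	(( rc & 4 )) && echo "kubernetes not running"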

TestMultiControlPlane/serial/RestartCluster (219.28s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-476000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
E0926 17:53:14.415063    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:55:36.848558    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-476000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : exit status 90 (3m35.269262442s)

-- stdout --
	* [ha-476000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-476000" primary control-plane node in "ha-476000" cluster
	* Restarting existing hyperkit VM for "ha-476000" ...
	* Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	* Enabled addons: 
	
	* Starting "ha-476000-m02" control-plane node in "ha-476000" cluster
	* Restarting existing hyperkit VM for "ha-476000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	* Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	  - env NO_PROXY=192.169.0.5
	* Verifying Kubernetes components...
	
	* Starting "ha-476000-m03" control-plane node in "ha-476000" cluster
	* Restarting existing hyperkit VM for "ha-476000-m03" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5,192.169.0.6
	
	

-- /stdout --
** stderr ** 
	I0926 17:53:00.467998    4178 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:53:00.468247    4178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:53:00.468252    4178 out.go:358] Setting ErrFile to fd 2...
	I0926 17:53:00.468256    4178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:53:00.468436    4178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 17:53:00.469901    4178 out.go:352] Setting JSON to false
	I0926 17:53:00.492370    4178 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3150,"bootTime":1727395230,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0926 17:53:00.492530    4178 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:53:00.514400    4178 out.go:177] * [ha-476000] minikube v1.34.0 on Darwin 14.6.1
	I0926 17:53:00.557228    4178 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:53:00.557300    4178 notify.go:220] Checking for updates...
	I0926 17:53:00.599719    4178 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:00.621009    4178 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0926 17:53:00.642091    4178 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:53:00.662936    4178 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 17:53:00.684204    4178 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:53:00.705550    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:00.706120    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.706169    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:00.715431    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52037
	I0926 17:53:00.715807    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:00.716207    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:00.716243    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:00.716493    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:00.716626    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:00.716833    4178 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:53:00.717101    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.717132    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:00.725380    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52039
	I0926 17:53:00.725706    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:00.726059    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:00.726076    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:00.726325    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:00.726449    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:00.754773    4178 out.go:177] * Using the hyperkit driver based on existing profile
	I0926 17:53:00.797071    4178 start.go:297] selected driver: hyperkit
	I0926 17:53:00.797101    4178 start.go:901] validating driver "hyperkit" against &{Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclas
s:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:53:00.797347    4178 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:53:00.797543    4178 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:53:00.797758    4178 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19711-1128/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0926 17:53:00.807380    4178 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0926 17:53:00.811121    4178 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.811145    4178 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0926 17:53:00.813743    4178 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:53:00.813780    4178 cni.go:84] Creating CNI manager for ""
	I0926 17:53:00.813817    4178 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0926 17:53:00.813892    4178 start.go:340] cluster config:
	{Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacce
l:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:53:00.814010    4178 iso.go:125] acquiring lock: {Name:mka8a9c5a237c1e4ae233281d2ff7965d13a843d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:53:00.856015    4178 out.go:177] * Starting "ha-476000" primary control-plane node in "ha-476000" cluster
	I0926 17:53:00.877127    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:53:00.877240    4178 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0926 17:53:00.877263    4178 cache.go:56] Caching tarball of preloaded images
	I0926 17:53:00.877457    4178 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 17:53:00.877476    4178 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:53:00.877658    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:00.878610    4178 start.go:360] acquireMachinesLock for ha-476000: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:53:00.878759    4178 start.go:364] duration metric: took 97.008µs to acquireMachinesLock for "ha-476000"
	I0926 17:53:00.878828    4178 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:53:00.878843    4178 fix.go:54] fixHost starting: 
	I0926 17:53:00.879324    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.879362    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:00.888435    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52041
	I0926 17:53:00.888799    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:00.889164    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:00.889177    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:00.889396    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:00.889518    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:00.889616    4178 main.go:141] libmachine: (ha-476000) Calling .GetState
	I0926 17:53:00.889695    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:00.889775    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4068
	I0926 17:53:00.890689    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid 4068 missing from process table
	I0926 17:53:00.890720    4178 fix.go:112] recreateIfNeeded on ha-476000: state=Stopped err=<nil>
	I0926 17:53:00.890735    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	W0926 17:53:00.890819    4178 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:53:00.933253    4178 out.go:177] * Restarting existing hyperkit VM for "ha-476000" ...
	I0926 17:53:00.956221    4178 main.go:141] libmachine: (ha-476000) Calling .Start
	I0926 17:53:00.956482    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:00.956522    4178 main.go:141] libmachine: (ha-476000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid
	I0926 17:53:00.958313    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid 4068 missing from process table
	I0926 17:53:00.958323    4178 main.go:141] libmachine: (ha-476000) DBG | pid 4068 is in state "Stopped"
	I0926 17:53:00.958337    4178 main.go:141] libmachine: (ha-476000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid...
	I0926 17:53:00.958705    4178 main.go:141] libmachine: (ha-476000) DBG | Using UUID 99cfbb80-3e9d-4d4f-a72a-447af4e3b1db
	I0926 17:53:01.067490    4178 main.go:141] libmachine: (ha-476000) DBG | Generated MAC 96:a2:4a:f3:be:4a
	I0926 17:53:01.067521    4178 main.go:141] libmachine: (ha-476000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000
	I0926 17:53:01.067590    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"99cfbb80-3e9d-4d4f-a72a-447af4e3b1db", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:01.067614    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"99cfbb80-3e9d-4d4f-a72a-447af4e3b1db", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:01.067680    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "99cfbb80-3e9d-4d4f-a72a-447af4e3b1db", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/ha-476000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"}
	I0926 17:53:01.067717    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 99cfbb80-3e9d-4d4f-a72a-447af4e3b1db -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/ha-476000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"
	I0926 17:53:01.067731    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 17:53:01.069340    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Pid is 4191
	I0926 17:53:01.069679    4178 main.go:141] libmachine: (ha-476000) DBG | Attempt 0
	I0926 17:53:01.069693    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:01.069753    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4191
	I0926 17:53:01.071639    4178 main.go:141] libmachine: (ha-476000) DBG | Searching for 96:a2:4a:f3:be:4a in /var/db/dhcpd_leases ...
	I0926 17:53:01.071694    4178 main.go:141] libmachine: (ha-476000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:53:01.071711    4178 main.go:141] libmachine: (ha-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f7523f}
	I0926 17:53:01.071719    4178 main.go:141] libmachine: (ha-476000) DBG | Found match: 96:a2:4a:f3:be:4a
	I0926 17:53:01.071724    4178 main.go:141] libmachine: (ha-476000) DBG | IP: 192.169.0.5
	I0926 17:53:01.071801    4178 main.go:141] libmachine: (ha-476000) Calling .GetConfigRaw
	I0926 17:53:01.072466    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:01.072682    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:01.073265    4178 machine.go:93] provisionDockerMachine start ...
	I0926 17:53:01.073276    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:01.073432    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:01.073553    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:01.073654    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:01.073744    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:01.073824    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:01.073962    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:01.074151    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:01.074160    4178 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 17:53:01.077803    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 17:53:01.131821    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 17:53:01.132498    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:01.132519    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:01.132527    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:01.132535    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:01.515934    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 17:53:01.515948    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 17:53:01.630853    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:01.630870    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:01.630880    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:01.630889    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:01.631762    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 17:53:01.631773    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 17:53:07.224844    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0926 17:53:07.224979    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0926 17:53:07.224989    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0926 17:53:07.249067    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0926 17:53:12.148094    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 17:53:12.148109    4178 main.go:141] libmachine: (ha-476000) Calling .GetMachineName
	I0926 17:53:12.148318    4178 buildroot.go:166] provisioning hostname "ha-476000"
	I0926 17:53:12.148328    4178 main.go:141] libmachine: (ha-476000) Calling .GetMachineName
	I0926 17:53:12.148430    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.148546    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.148649    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.148741    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.148844    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.148986    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.149192    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.149200    4178 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-476000 && echo "ha-476000" | sudo tee /etc/hostname
	I0926 17:53:12.225889    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-476000
	
	I0926 17:53:12.225907    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.226039    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.226125    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.226235    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.226313    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.226463    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.226601    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.226612    4178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-476000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-476000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-476000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:53:12.298491    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
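The /etc/hosts edit above is deliberately idempotent: `grep -xq` matches whole lines only, so re-provisioning never appends a duplicate entry, and an existing 127.0.1.1 line is rewritten in place rather than stacked. The same add-or-replace pattern with the host name factored out (illustrative, not part of the provisioner):

	HOST=ha-476000
	if ! grep -xq ".*\s$HOST" /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $HOST/g" /etc/hosts
	  else
	    echo "127.0.1.1 $HOST" | sudo tee -a /etc/hosts
	  fi
	fi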
	I0926 17:53:12.298512    4178 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 17:53:12.298531    4178 buildroot.go:174] setting up certificates
	I0926 17:53:12.298537    4178 provision.go:84] configureAuth start
	I0926 17:53:12.298544    4178 main.go:141] libmachine: (ha-476000) Calling .GetMachineName
	I0926 17:53:12.298672    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:12.298777    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.298858    4178 provision.go:143] copyHostCerts
	I0926 17:53:12.298890    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:12.298959    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 17:53:12.298968    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:12.299110    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 17:53:12.299320    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:12.299359    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 17:53:12.299364    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:12.299452    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 17:53:12.299596    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:12.299633    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 17:53:12.299638    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:12.299717    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 17:53:12.299883    4178 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.ha-476000 san=[127.0.0.1 192.169.0.5 ha-476000 localhost minikube]
	I0926 17:53:12.619231    4178 provision.go:177] copyRemoteCerts
	I0926 17:53:12.619306    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 17:53:12.619328    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.619499    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.619617    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.619721    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.619805    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:12.659598    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 17:53:12.659672    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 17:53:12.679552    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 17:53:12.679620    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0926 17:53:12.699069    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 17:53:12.699141    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 17:53:12.718755    4178 provision.go:87] duration metric: took 420.20261ms to configureAuth
	I0926 17:53:12.718767    4178 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:53:12.718921    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:12.718934    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:12.719072    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.719167    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.719255    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.719341    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.719422    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.719544    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.719669    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.719676    4178 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:53:12.785771    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:53:12.785788    4178 buildroot.go:70] root file system type: tmpfs
	I0926 17:53:12.785872    4178 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:53:12.785886    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.786022    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.786110    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.786193    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.786273    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.786415    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.786558    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.786601    4178 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:53:12.862455    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 17:53:12.862477    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.862607    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.862705    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.862800    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.862882    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.863016    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.863156    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.863169    4178 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:53:14.510518    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
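
The command at 17:53:12.863169 is minikube's install-if-changed idiom: the new unit only replaces the old one (followed by daemon-reload, enable and restart) when diff reports a difference, and diff also fails when the target is missing, which is why this freshly restored VM takes the replace branch, as the "can't stat" output shows. The same idiom, sketched for an arbitrary config file (paths and the render step are illustrative):

    # Replace and restart only when the rendered file differs from disk.
    render-config > /tmp/app.conf.new   # hypothetical generator
    sudo diff -u /etc/app.conf /tmp/app.conf.new || {
      sudo mv /tmp/app.conf.new /etc/app.conf
      sudo systemctl restart app
    }
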
	
	I0926 17:53:14.510534    4178 machine.go:96] duration metric: took 13.437211612s to provisionDockerMachine
	I0926 17:53:14.510545    4178 start.go:293] postStartSetup for "ha-476000" (driver="hyperkit")
	I0926 17:53:14.510553    4178 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:53:14.510563    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.510765    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:53:14.510780    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.510875    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.510981    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.511085    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.511186    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.553095    4178 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:53:14.556852    4178 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 17:53:14.556867    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 17:53:14.556973    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 17:53:14.557159    4178 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 17:53:14.557167    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 17:53:14.557383    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 17:53:14.567060    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:53:14.600616    4178 start.go:296] duration metric: took 90.060103ms for postStartSetup
	I0926 17:53:14.600637    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.600819    4178 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0926 17:53:14.600832    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.600912    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.600992    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.601061    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.601150    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.640650    4178 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0926 17:53:14.640716    4178 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0926 17:53:14.694957    4178 fix.go:56] duration metric: took 13.816065248s for fixHost
	I0926 17:53:14.694980    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.695115    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.695206    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.695301    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.695399    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.695527    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:14.695674    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:14.695682    4178 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:53:14.760098    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398394.872717718
	
	I0926 17:53:14.760109    4178 fix.go:216] guest clock: 1727398394.872717718
	I0926 17:53:14.760115    4178 fix.go:229] Guest: 2024-09-26 17:53:14.872717718 -0700 PDT Remote: 2024-09-26 17:53:14.69497 -0700 PDT m=+14.262859348 (delta=177.747718ms)
	I0926 17:53:14.760134    4178 fix.go:200] guest clock delta is within tolerance: 177.747718ms
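
The clock check runs date +%s.%N inside the guest and compares the result with the host's wall clock at the moment the command returns; a delta inside tolerance (~178ms here) means no forced resync is needed. A rough way to reproduce the measurement by hand (guest address from this run; python3 stands in for the host timestamp because BSD date on macOS lacks %N):

    host_ts=$(python3 -c 'import time; print(time.time())')
    guest_ts=$(ssh docker@192.169.0.5 'date +%s.%N')
    echo "delta: $(echo "$guest_ts - $host_ts" | bc)s"
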
	I0926 17:53:14.760137    4178 start.go:83] releasing machines lock for "ha-476000", held for 13.881299475s
	I0926 17:53:14.760155    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760297    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:14.760395    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760729    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760850    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760950    4178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:53:14.760987    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.761013    4178 ssh_runner.go:195] Run: cat /version.json
	I0926 17:53:14.761025    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.761099    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.761116    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.761194    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.761205    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.761304    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.761313    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.761398    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.761432    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.795855    4178 ssh_runner.go:195] Run: systemctl --version
	I0926 17:53:14.843523    4178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 17:53:14.848548    4178 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:53:14.848602    4178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 17:53:14.862277    4178 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 17:53:14.862289    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:14.862388    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:14.879332    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 17:53:14.888407    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:53:14.897249    4178 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:53:14.897300    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:53:14.906191    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:14.914943    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:53:14.923611    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:14.932390    4178 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:53:14.941382    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:53:14.950233    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:53:14.959047    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 17:53:14.967887    4178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:53:14.975975    4178 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 17:53:14.976018    4178 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 17:53:14.985185    4178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
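
The status-255 sysctl above is a probe, not a failure: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, so minikube falls back to modprobe and then enables IPv4 forwarding. Condensed, the same preparation is:

    # Ensure bridged traffic traverses iptables and IPv4 forwarding is on.
    sudo sysctl net.bridge.bridge-nf-call-iptables 2>/dev/null \
      || sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
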
	I0926 17:53:14.993181    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:15.086628    4178 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 17:53:15.106310    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:15.106396    4178 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:53:15.118546    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:15.129665    4178 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:53:15.143061    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:15.154154    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:15.164978    4178 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 17:53:15.188125    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:15.199509    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:15.214608    4178 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:53:15.217523    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:53:15.225391    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 17:53:15.238858    4178 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:53:15.337444    4178 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:53:15.437802    4178 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:53:15.437879    4178 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 17:53:15.451733    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:15.563208    4178 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:53:17.891140    4178 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.327906141s)
	I0926 17:53:17.891209    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 17:53:17.902729    4178 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0926 17:53:17.915694    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:17.926164    4178 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 17:53:18.028587    4178 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 17:53:18.135687    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:18.246049    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 17:53:18.259788    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:18.270995    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:18.379007    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 17:53:18.442458    4178 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 17:53:18.442555    4178 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0926 17:53:18.447167    4178 start.go:563] Will wait 60s for crictl version
	I0926 17:53:18.447233    4178 ssh_runner.go:195] Run: which crictl
	I0926 17:53:18.450364    4178 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 17:53:18.474973    4178 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0926 17:53:18.475082    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:53:18.492744    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:53:18.534852    4178 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0926 17:53:18.534897    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:18.535304    4178 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0926 17:53:18.539884    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
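
The /etc/hosts update is a strip-then-append rewrite: grep -v drops any stale host.minikube.internal line, the fresh mapping is appended, and the result lands back in place through a single sudo cp so the file is never left half-written. The same pattern for an arbitrary name (bash syntax, matching the /bin/bash -c invocation above; name and IP are illustrative):

    { grep -v $'\texample.internal$' /etc/hosts; \
      printf '10.0.0.1\texample.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
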
	I0926 17:53:18.549924    4178 kubeadm.go:883] updating cluster {Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 17:53:18.550017    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:53:18.550087    4178 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 17:53:18.562413    4178 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0926 17:53:18.562429    4178 docker.go:615] Images already preloaded, skipping extraction
	I0926 17:53:18.562517    4178 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 17:53:18.574107    4178 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0926 17:53:18.574127    4178 cache_images.go:84] Images are preloaded, skipping loading
	I0926 17:53:18.574137    4178 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0926 17:53:18.574213    4178 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-476000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 17:53:18.574296    4178 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0926 17:53:18.611557    4178 cni.go:84] Creating CNI manager for ""
	I0926 17:53:18.611571    4178 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0926 17:53:18.611586    4178 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 17:53:18.611607    4178 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-476000 NodeName:ha-476000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 17:53:18.611700    4178 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-476000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
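
This rendered config is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below; on a fresh cluster it would be handed to kubeadm wholesale rather than as individual flags. A hypothetical bootstrap invocation against it (this run instead takes the restart path):

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=all
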
	
	I0926 17:53:18.611713    4178 kube-vip.go:115] generating kube-vip config ...
	I0926 17:53:18.611769    4178 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0926 17:53:18.624452    4178 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0926 17:53:18.624524    4178 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
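
kube-vip runs as a static pod: the manifest above is written to /etc/kubernetes/manifests (the staticPodPath declared in the kubelet config earlier), and the kubelet starts and supervises it without going through the API server. Verifying by hand on the guest would look roughly like:

    ls /etc/kubernetes/manifests/kube-vip.yaml
    sudo /usr/bin/crictl ps --name kube-vip
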
	I0926 17:53:18.624583    4178 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0926 17:53:18.632661    4178 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 17:53:18.632722    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0926 17:53:18.640016    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0926 17:53:18.653424    4178 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 17:53:18.666861    4178 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0926 17:53:18.680665    4178 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0926 17:53:18.694237    4178 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0926 17:53:18.697273    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 17:53:18.706489    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:18.799127    4178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:53:18.813428    4178 certs.go:68] Setting up /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000 for IP: 192.169.0.5
	I0926 17:53:18.813441    4178 certs.go:194] generating shared ca certs ...
	I0926 17:53:18.813450    4178 certs.go:226] acquiring lock for ca certs: {Name:mk3d4665cb5756ae49ea6f7cd2a58d273aecc507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:18.813627    4178 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key
	I0926 17:53:18.813697    4178 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key
	I0926 17:53:18.813709    4178 certs.go:256] generating profile certs ...
	I0926 17:53:18.813816    4178 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key
	I0926 17:53:18.813837    4178 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9
	I0926 17:53:18.813853    4178 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0926 17:53:19.198737    4178 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9 ...
	I0926 17:53:19.198759    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9: {Name:mkf72026f41cf052c5981dfd73bcc3ea46813a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.199347    4178 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9 ...
	I0926 17:53:19.199358    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9: {Name:mkb6fc9895bd700bb149434e702cedd545112b66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.199565    4178 certs.go:381] copying /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9 -> /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt
	I0926 17:53:19.199778    4178 certs.go:385] copying /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9 -> /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key
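
The regenerated apiserver cert carries every address a client might dial: the in-cluster service IP 10.96.0.1, loopback, all three control-plane node IPs, and the kube-vip VIP 192.169.0.254. One way to confirm the SAN list on the file just written:

    openssl x509 -noout -text \
      -in /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
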
	I0926 17:53:19.200020    4178 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key
	I0926 17:53:19.200030    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0926 17:53:19.200052    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0926 17:53:19.200071    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0926 17:53:19.200089    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0926 17:53:19.200107    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0926 17:53:19.200125    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0926 17:53:19.200142    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0926 17:53:19.200160    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0926 17:53:19.200250    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem (1338 bytes)
	W0926 17:53:19.200297    4178 certs.go:480] ignoring /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679_empty.pem, impossibly tiny 0 bytes
	I0926 17:53:19.200306    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 17:53:19.200335    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem (1082 bytes)
	I0926 17:53:19.200365    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem (1123 bytes)
	I0926 17:53:19.200393    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem (1675 bytes)
	I0926 17:53:19.200455    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:53:19.200488    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem -> /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.200508    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.200526    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.200943    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 17:53:19.229781    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 17:53:19.249730    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 17:53:19.269922    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0926 17:53:19.290358    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0926 17:53:19.309964    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 17:53:19.329782    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 17:53:19.349170    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 17:53:19.368557    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem --> /usr/share/ca-certificates/1679.pem (1338 bytes)
	I0926 17:53:19.388315    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1708 bytes)
	I0926 17:53:19.407646    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 17:53:19.427156    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 17:53:19.441065    4178 ssh_runner.go:195] Run: openssl version
	I0926 17:53:19.445301    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1679.pem && ln -fs /usr/share/ca-certificates/1679.pem /etc/ssl/certs/1679.pem"
	I0926 17:53:19.453728    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.457317    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:36 /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.457357    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.461742    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1679.pem /etc/ssl/certs/51391683.0"
	I0926 17:53:19.470198    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0926 17:53:19.478616    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.482140    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:36 /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.482201    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.486473    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 17:53:19.494777    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 17:53:19.503295    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.506902    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.506943    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.511360    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
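
The link names 51391683.0, 3ec20f2e.0 and b5213941.0 are OpenSSL's hashed-directory convention: TLS libraries locate a CA under /etc/ssl/certs by the hash of its subject name plus a .0 suffix, which is exactly what the openssl x509 -hash calls above compute. Recreating one link by hand on the guest:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"
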
	I0926 17:53:19.519826    4178 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 17:53:19.523465    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0926 17:53:19.528006    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0926 17:53:19.532444    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0926 17:53:19.537126    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0926 17:53:19.541512    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0926 17:53:19.545827    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
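
Each -checkend 86400 exits non-zero if the certificate expires within the next 24 hours (86400 seconds); all six passing here is what lets minikube reuse the existing control-plane certs instead of regenerating them. In isolation:

    # Exit 0: valid for at least another day; exit 1: expiring or expired.
    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "cert ok" || echo "renewal needed"
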
	I0926 17:53:19.550166    4178 kubeadm.go:392] StartCluster: {Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:53:19.550298    4178 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 17:53:19.561803    4178 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 17:53:19.569639    4178 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0926 17:53:19.569650    4178 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0926 17:53:19.569698    4178 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0926 17:53:19.577403    4178 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0926 17:53:19.577718    4178 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-476000" does not appear in /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:19.577801    4178 kubeconfig.go:62] /Users/jenkins/minikube-integration/19711-1128/kubeconfig needs updating (will repair): [kubeconfig missing "ha-476000" cluster setting kubeconfig missing "ha-476000" context setting]
	I0926 17:53:19.577967    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/kubeconfig: {Name:mkb9c069c578c490d8cb734b05a21821e9215482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.578378    4178 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:19.578577    4178 kapi.go:59] client config for ha-476000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key", CAFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7459f00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 17:53:19.578890    4178 cert_rotation.go:140] Starting client certificate rotation controller
	I0926 17:53:19.579075    4178 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0926 17:53:19.586457    4178 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0926 17:53:19.586468    4178 kubeadm.go:597] duration metric: took 16.814329ms to restartPrimaryControlPlane
	I0926 17:53:19.586474    4178 kubeadm.go:394] duration metric: took 36.313109ms to StartCluster
	I0926 17:53:19.586484    4178 settings.go:142] acquiring lock: {Name:mka8948d0f70add5c5f20f2eca7124a97a496c1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.586556    4178 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:19.586877    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/kubeconfig: {Name:mkb9c069c578c490d8cb734b05a21821e9215482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.587096    4178 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:53:19.587108    4178 start.go:241] waiting for startup goroutines ...
	I0926 17:53:19.587128    4178 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 17:53:19.587252    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:19.629430    4178 out.go:177] * Enabled addons: 
	I0926 17:53:19.650423    4178 addons.go:510] duration metric: took 63.269239ms for enable addons: enabled=[]
	I0926 17:53:19.650464    4178 start.go:246] waiting for cluster config update ...
	I0926 17:53:19.650475    4178 start.go:255] writing updated cluster config ...
	I0926 17:53:19.672508    4178 out.go:201] 
	I0926 17:53:19.693989    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:19.694118    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:19.716427    4178 out.go:177] * Starting "ha-476000-m02" control-plane node in "ha-476000" cluster
	I0926 17:53:19.758555    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:53:19.758588    4178 cache.go:56] Caching tarball of preloaded images
	I0926 17:53:19.758767    4178 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 17:53:19.758785    4178 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:53:19.758898    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:19.759817    4178 start.go:360] acquireMachinesLock for ha-476000-m02: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:53:19.759922    4178 start.go:364] duration metric: took 80.364µs to acquireMachinesLock for "ha-476000-m02"
	I0926 17:53:19.759947    4178 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:53:19.759956    4178 fix.go:54] fixHost starting: m02
	I0926 17:53:19.760406    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:19.760442    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:19.769605    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52063
	I0926 17:53:19.770014    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:19.770353    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:19.770365    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:19.770608    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:19.770743    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:19.770835    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetState
	I0926 17:53:19.770922    4178 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:19.771000    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid from json: 4002
	I0926 17:53:19.771916    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid 4002 missing from process table
	I0926 17:53:19.771940    4178 fix.go:112] recreateIfNeeded on ha-476000-m02: state=Stopped err=<nil>
	I0926 17:53:19.771957    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	W0926 17:53:19.772037    4178 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:53:19.814436    4178 out.go:177] * Restarting existing hyperkit VM for "ha-476000-m02" ...
	I0926 17:53:19.835535    4178 main.go:141] libmachine: (ha-476000-m02) Calling .Start
	I0926 17:53:19.835810    4178 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:19.835874    4178 main.go:141] libmachine: (ha-476000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid
	I0926 17:53:19.837665    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid 4002 missing from process table
	I0926 17:53:19.837678    4178 main.go:141] libmachine: (ha-476000-m02) DBG | pid 4002 is in state "Stopped"
	I0926 17:53:19.837694    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid...
	I0926 17:53:19.838041    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Using UUID 58f499c4-942a-445b-bae0-ab27a7b8106e
	I0926 17:53:19.865707    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Generated MAC 9e:5:36:80:93:e3
	I0926 17:53:19.865728    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000
	I0926 17:53:19.865872    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"58f499c4-942a-445b-bae0-ab27a7b8106e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:19.865901    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"58f499c4-942a-445b-bae0-ab27a7b8106e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:19.865946    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "58f499c4-942a-445b-bae0-ab27a7b8106e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/ha-476000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"}
	I0926 17:53:19.866020    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 58f499c4-942a-445b-bae0-ab27a7b8106e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/ha-476000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"
	I0926 17:53:19.866041    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 17:53:19.867306    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Pid is 4198
	I0926 17:53:19.867704    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Attempt 0
	I0926 17:53:19.867718    4178 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:19.867787    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid from json: 4198
	I0926 17:53:19.869727    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Searching for 9e:5:36:80:93:e3 in /var/db/dhcpd_leases ...
	I0926 17:53:19.869759    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:53:19.869772    4178 main.go:141] libmachine: (ha-476000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 17:53:19.869793    4178 main.go:141] libmachine: (ha-476000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 17:53:19.869821    4178 main.go:141] libmachine: (ha-476000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f751f8}
	I0926 17:53:19.869834    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Found match: 9e:5:36:80:93:e3
	I0926 17:53:19.869848    4178 main.go:141] libmachine: (ha-476000-m02) DBG | IP: 192.169.0.6
	I0926 17:53:19.869914    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetConfigRaw
	I0926 17:53:19.870579    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:19.870762    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:19.871158    4178 machine.go:93] provisionDockerMachine start ...
	I0926 17:53:19.871172    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:19.871294    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:19.871392    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:19.871530    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:19.871631    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:19.871718    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:19.871893    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:19.872031    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:19.872038    4178 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 17:53:19.875766    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 17:53:19.884496    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 17:53:19.885379    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:19.885391    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:19.885398    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:19.885403    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:20.270703    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 17:53:20.270718    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 17:53:20.385412    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:20.385431    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:20.385441    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:20.385468    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:20.386358    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 17:53:20.386369    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 17:53:25.988386    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0926 17:53:25.988424    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0926 17:53:25.988435    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0926 17:53:26.012163    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:26 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0926 17:53:30.140708    4178 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.6:22: connect: connection refused
	I0926 17:53:33.199866    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 17:53:33.199881    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetMachineName
	I0926 17:53:33.200004    4178 buildroot.go:166] provisioning hostname "ha-476000-m02"
	I0926 17:53:33.200013    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetMachineName
	I0926 17:53:33.200123    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.200213    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.200322    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.200426    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.200540    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.200702    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.200858    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.200867    4178 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-476000-m02 && echo "ha-476000-m02" | sudo tee /etc/hostname
	I0926 17:53:33.269037    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-476000-m02
	
	I0926 17:53:33.269056    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.269193    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.269285    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.269368    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.269450    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.269573    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.269735    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.269746    4178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-476000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-476000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-476000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:53:33.331289    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 17:53:33.331305    4178 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 17:53:33.331314    4178 buildroot.go:174] setting up certificates
	I0926 17:53:33.331321    4178 provision.go:84] configureAuth start
	I0926 17:53:33.331328    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetMachineName
	I0926 17:53:33.331463    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:33.331556    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.331643    4178 provision.go:143] copyHostCerts
	I0926 17:53:33.331674    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:33.331734    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 17:53:33.331740    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:33.331856    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 17:53:33.332044    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:33.332093    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 17:53:33.332098    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:33.332176    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 17:53:33.332314    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:33.332352    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 17:53:33.332356    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:33.332427    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 17:53:33.332570    4178 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.ha-476000-m02 san=[127.0.0.1 192.169.0.6 ha-476000-m02 localhost minikube]
	I0926 17:53:33.395607    4178 provision.go:177] copyRemoteCerts
	I0926 17:53:33.395696    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 17:53:33.395715    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.395906    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.396015    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.396100    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.396196    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:33.431740    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 17:53:33.431806    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0926 17:53:33.452053    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 17:53:33.452106    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 17:53:33.471760    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 17:53:33.471825    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 17:53:33.490896    4178 provision.go:87] duration metric: took 159.567474ms to configureAuth
	I0926 17:53:33.490910    4178 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:53:33.491086    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:33.491099    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:33.491231    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.491321    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.491413    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.491498    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.491591    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.491713    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.491847    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.491854    4178 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:53:33.547403    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:53:33.547417    4178 buildroot.go:70] root file system type: tmpfs
	I0926 17:53:33.547504    4178 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:53:33.547518    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.547665    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.547775    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.547896    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.547997    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.548125    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.548268    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.548312    4178 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:53:33.613348    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 17:53:33.613367    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.613495    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.613582    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.613661    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.613747    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.613879    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.614018    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.614033    4178 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:53:35.261247    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 17:53:35.261262    4178 machine.go:96] duration metric: took 15.390039559s to provisionDockerMachine
	I0926 17:53:35.261270    4178 start.go:293] postStartSetup for "ha-476000-m02" (driver="hyperkit")
	I0926 17:53:35.261294    4178 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:53:35.261308    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.261509    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:53:35.261522    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.261612    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.261704    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.261809    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.261922    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:35.302268    4178 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:53:35.305656    4178 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 17:53:35.305666    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 17:53:35.305765    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 17:53:35.305947    4178 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 17:53:35.305953    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 17:53:35.306171    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 17:53:35.314020    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:53:35.344643    4178 start.go:296] duration metric: took 83.349532ms for postStartSetup
	I0926 17:53:35.344681    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.344863    4178 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0926 17:53:35.344877    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.344965    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.345056    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.345137    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.345223    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:35.381164    4178 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0926 17:53:35.381229    4178 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0926 17:53:35.414571    4178 fix.go:56] duration metric: took 15.654555871s for fixHost
	I0926 17:53:35.414597    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.414747    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.414839    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.414932    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.415022    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.415156    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:35.415295    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:35.415302    4178 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:53:35.472100    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398415.586409353
	
	I0926 17:53:35.472129    4178 fix.go:216] guest clock: 1727398415.586409353
	I0926 17:53:35.472134    4178 fix.go:229] Guest: 2024-09-26 17:53:35.586409353 -0700 PDT Remote: 2024-09-26 17:53:35.414586 -0700 PDT m=+34.982399519 (delta=171.823353ms)
	I0926 17:53:35.472150    4178 fix.go:200] guest clock delta is within tolerance: 171.823353ms
	I0926 17:53:35.472153    4178 start.go:83] releasing machines lock for "ha-476000-m02", held for 15.712162695s
	I0926 17:53:35.472170    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.472305    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:35.513568    4178 out.go:177] * Found network options:
	I0926 17:53:35.535552    4178 out.go:177]   - NO_PROXY=192.169.0.5
	W0926 17:53:35.557416    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:53:35.557455    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.558341    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.558579    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.558709    4178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:53:35.558764    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	W0926 17:53:35.558835    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:53:35.558964    4178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 17:53:35.558985    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.559000    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.559215    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.559232    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.559433    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.559464    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.559662    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.559681    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:35.559790    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	W0926 17:53:35.596059    4178 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:53:35.596139    4178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 17:53:35.610162    4178 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 17:53:35.610178    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:35.610237    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:35.646709    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 17:53:35.656640    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:53:35.665578    4178 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:53:35.665623    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:53:35.674574    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:35.683489    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:53:35.692471    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:35.701275    4178 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:53:35.710401    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:53:35.719421    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:53:35.728448    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 17:53:35.738067    4178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:53:35.746743    4178 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 17:53:35.746802    4178 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 17:53:35.755939    4178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 17:53:35.763977    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:35.862563    4178 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 17:53:35.881531    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:35.881616    4178 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:53:35.899471    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:35.910823    4178 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:53:35.923558    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:35.935946    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:35.946007    4178 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 17:53:35.969898    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:35.980115    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:35.995271    4178 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:53:35.998508    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:53:36.005810    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 17:53:36.019492    4178 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:53:36.116976    4178 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:53:36.228090    4178 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:53:36.228117    4178 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 17:53:36.242164    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:36.335597    4178 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:53:38.678847    4178 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.343223137s)
	I0926 17:53:38.678917    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 17:53:38.689531    4178 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0926 17:53:38.702816    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:38.713151    4178 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 17:53:38.819068    4178 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 17:53:38.926667    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:39.040074    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 17:53:39.054197    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:39.065256    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:39.163219    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 17:53:39.228416    4178 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 17:53:39.228518    4178 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0926 17:53:39.233191    4178 start.go:563] Will wait 60s for crictl version
	I0926 17:53:39.233249    4178 ssh_runner.go:195] Run: which crictl
	I0926 17:53:39.236580    4178 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 17:53:39.262407    4178 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0926 17:53:39.262495    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:53:39.279010    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:53:39.317905    4178 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0926 17:53:39.359545    4178 out.go:177]   - env NO_PROXY=192.169.0.5
	I0926 17:53:39.381103    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:39.381320    4178 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0926 17:53:39.384579    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 17:53:39.394395    4178 mustload.go:65] Loading cluster: ha-476000
	I0926 17:53:39.394560    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:39.394810    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:39.394834    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:39.403482    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52086
	I0926 17:53:39.403823    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:39.404150    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:39.404164    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:39.404434    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:39.404542    4178 main.go:141] libmachine: (ha-476000) Calling .GetState
	I0926 17:53:39.404632    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:39.404706    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4191
	I0926 17:53:39.405678    4178 host.go:66] Checking if "ha-476000" exists ...
	I0926 17:53:39.405956    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:39.405986    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:39.414686    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52088
	I0926 17:53:39.415056    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:39.415379    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:39.415388    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:39.415605    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:39.415728    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:39.415830    4178 certs.go:68] Setting up /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000 for IP: 192.169.0.6
	I0926 17:53:39.415836    4178 certs.go:194] generating shared ca certs ...
	I0926 17:53:39.415849    4178 certs.go:226] acquiring lock for ca certs: {Name:mk3d4665cb5756ae49ea6f7cd2a58d273aecc507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:39.416032    4178 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key
	I0926 17:53:39.416108    4178 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key
	I0926 17:53:39.416119    4178 certs.go:256] generating profile certs ...
	I0926 17:53:39.416243    4178 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key
	I0926 17:53:39.416331    4178 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.462632c0
	I0926 17:53:39.416399    4178 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key
	I0926 17:53:39.416406    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0926 17:53:39.416427    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0926 17:53:39.416446    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0926 17:53:39.416465    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0926 17:53:39.416482    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0926 17:53:39.416510    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0926 17:53:39.416544    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0926 17:53:39.416564    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0926 17:53:39.416666    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem (1338 bytes)
	W0926 17:53:39.416716    4178 certs.go:480] ignoring /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679_empty.pem, impossibly tiny 0 bytes
	I0926 17:53:39.416725    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 17:53:39.416762    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem (1082 bytes)
	I0926 17:53:39.416795    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem (1123 bytes)
	I0926 17:53:39.416828    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem (1675 bytes)
	I0926 17:53:39.416893    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:53:39.416929    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.416949    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem -> /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.416967    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.416991    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:39.417078    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:39.417153    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:39.417237    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:39.417320    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:39.447975    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0926 17:53:39.451073    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0926 17:53:39.458912    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0926 17:53:39.462003    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0926 17:53:39.470783    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0926 17:53:39.473836    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0926 17:53:39.481537    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0926 17:53:39.484645    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0926 17:53:39.492945    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0926 17:53:39.495978    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0926 17:53:39.503610    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0926 17:53:39.506808    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0926 17:53:39.514787    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 17:53:39.534891    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 17:53:39.554745    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 17:53:39.574668    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0926 17:53:39.594523    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0926 17:53:39.614131    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 17:53:39.633606    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 17:53:39.653376    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 17:53:39.673369    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 17:53:39.692952    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem --> /usr/share/ca-certificates/1679.pem (1338 bytes)
	I0926 17:53:39.712634    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1708 bytes)
	I0926 17:53:39.732005    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0926 17:53:39.745464    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0926 17:53:39.759232    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0926 17:53:39.772911    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0926 17:53:39.786441    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0926 17:53:39.800266    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0926 17:53:39.813927    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0926 17:53:39.827332    4178 ssh_runner.go:195] Run: openssl version
	I0926 17:53:39.831566    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0926 17:53:39.839850    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.843163    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:36 /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.843206    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.847374    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 17:53:39.855624    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 17:53:39.863965    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.867400    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.867452    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.871715    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 17:53:39.879907    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1679.pem && ln -fs /usr/share/ca-certificates/1679.pem /etc/ssl/certs/1679.pem"
	I0926 17:53:39.888247    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.891606    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:36 /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.891654    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.895855    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1679.pem /etc/ssl/certs/51391683.0"
	I0926 17:53:39.904043    4178 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 17:53:39.907450    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0926 17:53:39.911778    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0926 17:53:39.915909    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0926 17:53:39.920037    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0926 17:53:39.924167    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0926 17:53:39.928372    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0926 17:53:39.932543    4178 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.1 docker true true} ...
	I0926 17:53:39.932604    4178 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-476000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 17:53:39.932624    4178 kube-vip.go:115] generating kube-vip config ...
	I0926 17:53:39.932670    4178 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0926 17:53:39.944715    4178 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0926 17:53:39.944753    4178 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0926 17:53:39.944822    4178 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0926 17:53:39.953541    4178 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 17:53:39.953597    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0926 17:53:39.961618    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0926 17:53:39.975007    4178 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 17:53:39.988472    4178 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0926 17:53:40.002021    4178 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0926 17:53:40.004933    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 17:53:40.015059    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:40.118867    4178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:53:40.133377    4178 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:53:40.133568    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:40.154757    4178 out.go:177] * Verifying Kubernetes components...
	I0926 17:53:40.196346    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:40.323445    4178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:53:40.338817    4178 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:40.339037    4178 kapi.go:59] client config for ha-476000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key", CAFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7459f00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0926 17:53:40.339084    4178 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
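The kapi.go lines show the client being built from the on-disk kubeconfig and then repointed: the kubeconfig still names the VIP (https://192.169.0.254:8443), which is stale while this control-plane node is joining, so the test overrides it with the primary's direct endpoint https://192.169.0.5:8443. A sketch of that override with client-go; clientFor is an illustrative name:

package kapiutil

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// clientFor loads a kubeconfig and overrides a possibly-stale server
// address with a direct endpoint, as the warning above describes when it
// replaces https://192.169.0.254:8443 with https://192.169.0.5:8443.
func clientFor(kubeconfigPath, directHost string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	cfg.Host = directHost // e.g. "https://192.169.0.5:8443"
	return kubernetes.NewForConfig(cfg)
}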
	I0926 17:53:40.339280    4178 node_ready.go:35] waiting up to 6m0s for node "ha-476000-m02" to be "Ready" ...
	I0926 17:53:40.339354    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:40.339359    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:40.339366    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:40.339369    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:47.201921    4178 round_trippers.go:574] Response Status:  in 6862 milliseconds
	I0926 17:53:48.202681    4178 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:48.202709    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:48.202713    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:48.202720    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:48.202724    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:49.203128    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:49.203194    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.1:52091->192.169.0.5:8443: read: connection reset by peer
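This stretch of the log is the apiserver coming back up: client-go's with_retry honors a Retry-After of 1s, and each attempt fails with connection refused (the first also carries a connection reset from a prior attempt) until the endpoint starts answering around 17:54:02. A sketch of a poll of that shape, treating transport errors as retryable until the caller's context expires; getNodeWithRetry is illustrative and assumes client-go:

package kapiutil

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// getNodeWithRetry re-issues the GET once a second while the apiserver is
// restarting, roughly the with_retry/node_ready behavior shown above.
func getNodeWithRetry(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	tick := time.NewTicker(time.Second)
	defer tick.Stop()
	for {
		_, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			return nil
		}
		fmt.Printf("error getting node %q: %v (will retry)\n", name, err)
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}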
	I0926 17:53:49.203240    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:49.203247    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:49.203252    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:49.203256    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:50.204478    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:50.204619    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:50.204631    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:50.204642    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:50.204649    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:51.204974    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:51.205045    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:51.205098    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:51.205108    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:51.205118    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:51.205124    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:52.205352    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:52.205474    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:52.205485    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:52.205496    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:52.205505    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:53.206703    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:53.206766    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:53.206822    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:53.206831    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:53.206843    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:53.206849    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:54.208032    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:54.208160    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:54.208172    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:54.208183    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:54.208190    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:55.208420    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:55.208484    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:55.208561    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:55.208572    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:55.208582    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:55.208586    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:56.209388    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:56.209496    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:56.209507    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:56.209517    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:56.209529    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:57.211492    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:57.211560    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:57.211643    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:57.211654    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:57.211665    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:57.211671    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:58.213441    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:58.213520    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:58.213528    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:58.213535    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:58.213538    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:59.215627    4178 round_trippers.go:574] Response Status:  in 1002 milliseconds
	I0926 17:53:59.215689    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:59.215761    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:59.215770    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:59.215781    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:59.215792    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:00.214970    4178 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0926 17:54:00.215057    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:00.215066    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:00.215072    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:00.215075    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:02.766651    4178 round_trippers.go:574] Response Status: 200 OK in 2551 milliseconds
	I0926 17:54:02.767320    4178 node_ready.go:53] node "ha-476000-m02" has status "Ready":"False"
	I0926 17:54:02.767364    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:02.767371    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:02.767378    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:02.767382    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:02.808455    4178 round_trippers.go:574] Response Status: 200 OK in 41 milliseconds
	I0926 17:54:02.839499    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:02.839515    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:02.839522    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:02.839524    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:02.844502    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:03.339950    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:03.339974    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:03.340014    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:03.340033    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:03.343931    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:03.839836    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:03.839849    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:03.839855    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:03.839859    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:03.842811    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:04.340378    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:04.340403    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.340414    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.340421    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.344418    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:04.839736    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:04.839752    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.839758    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.839762    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.842629    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:04.843116    4178 node_ready.go:49] node "ha-476000-m02" has status "Ready":"True"
	I0926 17:54:04.843129    4178 node_ready.go:38] duration metric: took 24.503742617s for node "ha-476000-m02" to be "Ready" ...
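node_ready above is a readiness poll: GET the node roughly every half second and stop once its Ready condition turns True, which here took 24.5s from kubelet start. The equivalent check written against client-go's wait helpers, as a sketch (waitNodeReady is illustrative; PollUntilContextTimeout assumes a reasonably recent apimachinery):

package kapiutil

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node's Ready condition, the check behind the
// `node "ha-476000-m02" has status "Ready":"True"` line above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}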
	I0926 17:54:04.843136    4178 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0926 17:54:04.843170    4178 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0926 17:54:04.843178    4178 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0926 17:54:04.843227    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0926 17:54:04.843232    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.843238    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.843242    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.851447    4178 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0926 17:54:04.858185    4178 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:04.858238    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:04.858243    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.858250    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.858254    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.860121    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:04.860597    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:04.860608    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.860614    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.860619    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.862704    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
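Each pod_ready iteration issues two GETs: one for the pod (here coredns-7c65d6cfc9-44l9n) and one for the node it is scheduled on, since a pod only counts as Ready while its node still is. The pod half of that check, sketched with client-go; podReady is an illustrative name:

package kapiutil

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podReady mirrors one iteration of the loop above: fetch the pod and
// report whether its Ready condition is True.
func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}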
	I0926 17:54:05.358322    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:05.358334    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.358341    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.358344    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.361386    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:05.361939    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:05.361947    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.361954    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.361958    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.366335    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:05.858443    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:05.858462    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.858485    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.858489    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.861181    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:05.861691    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:05.861698    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.861704    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.861706    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.863911    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:06.359311    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:06.359342    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.359350    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.359354    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.362329    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:06.362841    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:06.362848    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.362854    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.362864    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.365951    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:06.860115    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:06.860140    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.860152    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.860192    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.863829    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:06.864356    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:06.864364    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.864370    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.864372    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.866293    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:06.866641    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:07.359755    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:07.359781    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.359791    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.359796    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.362929    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:07.363432    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:07.363440    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.363449    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.363454    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.365354    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:07.859403    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:07.859428    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.859440    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.859447    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.863936    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:07.864482    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:07.864489    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.864494    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.864497    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.866695    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:08.359070    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:08.359095    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.359104    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.359110    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.363413    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:08.363975    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:08.363983    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.363989    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.363996    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.366160    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:08.858562    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:08.858596    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.858604    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.858608    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.861584    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:08.862306    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:08.862313    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.862319    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.862329    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.864555    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:09.359666    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:09.359694    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.359706    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.359710    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.364444    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:09.364796    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:09.364802    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.364808    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.364812    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.367017    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:09.367391    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:09.859578    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:09.859628    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.859645    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.859654    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.863289    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:09.863926    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:09.863934    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.863940    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.863942    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.865998    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:10.358368    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:10.358385    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.358391    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.358396    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.366195    4178 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0926 17:54:10.366734    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:10.366743    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.366752    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.366755    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.369544    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:10.859656    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:10.859683    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.859694    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.859701    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.864043    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:10.864491    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:10.864499    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.864504    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.864508    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.866558    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:11.360000    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:11.360026    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.360038    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.360045    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.364064    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:11.364604    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:11.364611    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.364617    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.364620    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.366561    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:11.859988    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:11.860011    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.860023    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.860028    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.863780    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:11.864488    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:11.864496    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.864502    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.864505    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.866527    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:11.866879    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:12.359231    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:12.359302    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.359317    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.359325    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.363142    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:12.363807    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:12.363815    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.363820    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.363823    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.365720    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:12.859295    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:12.859321    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.859332    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.859336    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.863604    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:12.864232    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:12.864243    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.864249    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.864252    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.866340    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:13.360473    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:13.360500    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.360511    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.360516    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.364925    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:13.365659    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:13.365667    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.365672    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.365677    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.367805    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:13.858451    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:13.858477    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.858490    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.858495    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.862381    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:13.862921    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:13.862929    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.862934    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.862938    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.864941    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:14.358942    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:14.358966    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.359005    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.359013    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.365723    4178 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0926 17:54:14.366181    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:14.366189    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.366193    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.366197    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.368552    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:14.368954    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:14.860475    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:14.860501    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.860543    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.860550    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.864207    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:14.864620    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:14.864628    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.864634    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.864637    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.866896    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:15.358734    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:15.358751    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.358757    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.358761    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.361477    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:15.362047    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:15.362056    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.362062    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.362072    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.364404    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:15.859641    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:15.859669    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.859681    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.859690    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.864301    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:15.864755    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:15.864762    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.864767    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.864771    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.866941    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:16.358689    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:16.358713    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.358762    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.358771    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.363038    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:16.363637    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:16.363644    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.363649    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.363665    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.365580    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:16.858829    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:16.858848    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.858857    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.858864    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.861418    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:16.861895    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:16.861903    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.861908    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.861913    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.864330    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:16.864660    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:17.358538    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:17.358557    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.358576    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.358581    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.361634    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:17.362216    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:17.362224    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.362230    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.362235    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.364368    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:17.858951    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:17.859025    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.859068    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.859083    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.863132    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:17.863643    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:17.863651    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.863660    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.863665    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.865816    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:18.358377    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:18.358396    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.358403    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.358429    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.364859    4178 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0926 17:54:18.365288    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:18.365296    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.365303    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.365306    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.367423    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:18.859211    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:18.859237    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.859250    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.859257    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.863321    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:18.863832    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:18.863840    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.863846    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.863849    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.865860    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:18.866261    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:19.358438    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:19.358453    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.358460    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.358463    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.361068    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:19.361685    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:19.361694    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.361700    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.361703    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.364079    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:19.859935    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:19.859961    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.859972    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.859979    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.864189    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:19.864623    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:19.864630    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.864638    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.864641    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.866680    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:20.359100    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:20.359154    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.359164    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.359169    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.362081    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:20.362587    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:20.362595    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.362601    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.362604    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.364581    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:20.860535    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:20.860561    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.860573    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.860581    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.864595    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:20.865051    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:20.865063    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.865070    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.865074    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.866939    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:20.867377    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:21.358839    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:21.358864    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.358910    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.358919    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.362304    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:21.362899    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:21.362907    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.362913    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.362923    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.364904    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:21.859198    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:21.859224    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.859235    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.859244    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.863464    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:21.863902    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:21.863911    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.863916    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.863920    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.866008    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:22.358500    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:22.358557    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.358567    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.358581    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.363039    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:22.363487    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:22.363495    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.363501    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.363504    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.365560    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:22.860486    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:22.860511    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.860523    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.860549    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.865059    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:22.865691    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:22.865699    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.865705    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.865708    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.867780    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:22.868136    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:23.358997    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:23.359023    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.359035    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.359043    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.363268    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:23.363930    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:23.363938    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.363944    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.363948    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.365982    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:23.858407    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:23.858421    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.858452    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.858457    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.861385    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:23.861801    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:23.861812    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.861818    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.861823    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.864061    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:24.360526    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:24.360553    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.360565    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.360571    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.364721    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:24.365349    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:24.365356    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.365362    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.365365    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.367430    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:24.858605    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:24.858630    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.858641    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.858648    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.862472    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:24.863003    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:24.863010    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.863016    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.863018    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.864908    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:25.358639    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:25.358664    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.358677    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.358684    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.362945    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:25.363487    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:25.363495    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.363501    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.363503    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.365691    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:25.366062    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
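[Editor's note] The ~500ms cadence of the paired GETs above is minikube's pod readiness poll: each iteration fetches the coredns pod, then its hosting node, and checks the pod's Ready condition. A minimal client-go sketch of one such check follows (the helper name isPodReady and its wiring are illustrative, not minikube's actual pod_ready.go code):

    // Package podwait sketches the readiness check driving the log above.
    package podwait

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the named pod has condition Ready=True.
    // Illustrative only; minikube's real check lives in pod_ready.go.
    func isPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, fmt.Errorf("pod %s/%s has no Ready condition yet", ns, name)
    }

Each "Response Status: 200 OK in N milliseconds" pair in the log corresponds to one pod GET plus one node GET of this kind, repeated until the condition flips or the per-pod timeout expires.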
	I0926 17:54:25.859315    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:25.859333    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.859341    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.859364    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.862801    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:25.863276    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:25.863284    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.863289    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.863293    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.865685    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:26.359001    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:26.359015    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.359021    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.359025    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.361573    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:26.362094    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:26.362101    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.362107    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.362111    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.364144    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:26.858599    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:26.858625    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.858637    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.858644    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.862247    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:26.862753    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:26.862762    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.862767    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.862771    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.864571    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:27.358862    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:27.358888    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.358899    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.358904    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.363109    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:27.363648    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:27.363657    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.363663    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.363669    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.365500    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:27.859752    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:27.859779    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.859790    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.859795    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.864255    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:27.864725    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:27.864733    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.864738    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.864741    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.866764    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:27.867055    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:28.359808    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:28.359835    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.359882    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.359890    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.363146    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:28.363572    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:28.363579    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.363585    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.363589    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.365498    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:28.858708    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:28.858734    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.858746    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.858752    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.862673    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:28.863231    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:28.863238    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.863244    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.863248    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.865181    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:29.359611    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:29.359640    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.359653    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.359660    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.362965    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:29.363411    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:29.363419    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.363425    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.363427    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.365174    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:29.859384    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:29.859402    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.859409    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.859414    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.862499    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:29.863033    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:29.863041    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.863047    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.863050    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.865154    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:30.359191    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:30.359209    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.359255    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.359265    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.361836    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:30.362303    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:30.362312    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.362317    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.362320    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.364567    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:30.364980    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:30.860033    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:30.860066    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.860101    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.860109    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.864359    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:30.864782    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:30.864790    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.864799    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.864805    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.866798    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:31.358678    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:31.358711    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.358762    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.358772    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.363329    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:31.363731    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:31.363739    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.363745    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.363751    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.365894    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:31.858683    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:31.858706    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.858718    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.858724    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.862717    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:31.863254    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:31.863262    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.863268    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.863272    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.865220    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:32.359370    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:32.359420    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.359434    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.359442    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.362904    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:32.363502    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:32.363510    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.363516    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.363518    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.365729    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:32.366016    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:32.859955    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:32.859990    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.859997    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.860001    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.874510    4178 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0926 17:54:32.875130    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:32.875137    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.875142    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.875145    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.883403    4178 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0926 17:54:33.359964    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:33.360006    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.360019    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.360025    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.362527    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:33.362934    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:33.362942    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.362948    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.362953    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.365277    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:33.860043    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:33.860070    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.860082    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.860089    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.864487    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:33.864960    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:33.864968    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.864974    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.864978    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.866813    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:34.359408    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:34.359422    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.359453    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.359457    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.361843    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:34.362407    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:34.362415    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.362419    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.362427    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.364587    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:34.859087    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:34.859113    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.859124    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.859132    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.863123    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:34.863508    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:34.863516    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.863522    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.863525    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.865516    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:34.865853    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:35.359972    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:35.359997    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.360039    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.360048    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.364311    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:35.364957    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:35.364964    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.364970    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.364974    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.367232    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:35.859251    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:35.859265    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.859271    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.859275    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.861746    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:35.862292    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:35.862304    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.862318    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.862323    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.864289    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:36.360234    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:36.360274    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.360284    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.360291    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.363297    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:36.363726    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:36.363734    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.363740    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.363743    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.365689    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:36.859037    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:36.859105    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.859119    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.859130    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.863205    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:36.863621    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:36.863629    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.863635    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.863638    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.865642    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:36.865933    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:37.359101    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:37.359127    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.359139    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.359145    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.363256    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:37.363851    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:37.363859    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.363865    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.363868    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.365908    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:37.859282    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:37.859308    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.859319    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.859325    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.863341    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:37.863718    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:37.863726    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.863731    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.863735    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.865672    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:38.359013    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:38.359055    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.359065    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.359070    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.361936    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:38.362521    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:38.362529    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.362534    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.362538    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.364699    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:38.859426    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:38.859453    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.859466    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.859475    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.863509    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:38.864012    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:38.864020    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.864025    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.864029    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.866259    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:38.866728    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:39.358730    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:39.358748    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.358756    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.358765    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.362410    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:39.362956    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:39.362964    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.362970    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.362979    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.365004    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:39.858564    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:39.858584    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.858592    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.858598    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.861794    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:39.862200    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:39.862208    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.862214    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.862219    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.864175    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.358549    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:40.358586    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.358596    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.358600    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.361533    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:40.362003    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.362011    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.362017    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.362020    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.364141    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:40.860048    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:40.860077    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.860087    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.860093    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.863900    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:40.864305    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.864314    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.864320    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.864322    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.866266    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.866599    4178 pod_ready.go:93] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.866610    4178 pod_ready.go:82] duration metric: took 36.008276067s for pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace to be "Ready" ...
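[Editor's note] The 36s figure is the same loop completing well inside its advertised budget ("waiting up to 6m0s for pod ..."). A hedged sketch of the bounded wait using apimachinery's wait helpers, reusing isPodReady from the sketch above; the 500ms interval and 6m timeout mirror what the log shows, not minikube's source:

    import (
        "context"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls isPodReady roughly every 500ms until it reports
    // true or the 6-minute budget runs out. Sketch only.
    func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                ready, err := isPodReady(ctx, c, ns, name)
                if err != nil {
                    return false, nil // transient API errors count as "not ready yet"
                }
                return ready, nil
            })
    }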
	I0926 17:54:40.866616    4178 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7jwgv" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.866646    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7jwgv
	I0926 17:54:40.866651    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.866657    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.866661    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.868466    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.868930    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.868938    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.868944    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.868948    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.870736    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.871103    4178 pod_ready.go:93] pod "coredns-7c65d6cfc9-7jwgv" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.871111    4178 pod_ready.go:82] duration metric: took 4.489575ms for pod "coredns-7c65d6cfc9-7jwgv" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.871118    4178 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.871146    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-476000
	I0926 17:54:40.871150    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.871156    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.871160    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.873206    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:40.873700    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.873707    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.873713    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.873717    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.875461    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.875829    4178 pod_ready.go:93] pod "etcd-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.875837    4178 pod_ready.go:82] duration metric: took 4.713943ms for pod "etcd-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.875844    4178 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.875875    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-476000-m02
	I0926 17:54:40.875880    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.875885    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.875890    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.877741    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.878137    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:40.878145    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.878151    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.878155    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.880023    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.880375    4178 pod_ready.go:93] pod "etcd-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.880384    4178 pod_ready.go:82] duration metric: took 4.534554ms for pod "etcd-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.880390    4178 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.880419    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-476000-m03
	I0926 17:54:40.880424    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.880429    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.880433    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.882094    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.882474    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:40.882481    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.882486    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.882496    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.884251    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.884613    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "etcd-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:40.884622    4178 pod_ready.go:82] duration metric: took 4.227661ms for pod "etcd-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:40.884628    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "etcd-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
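[Editor's note] etcd-ha-476000-m03 is skipped rather than waited on: the WaitExtra path first consults the hosting node's own Ready condition and bails out immediately when the node reports "Unknown", instead of spending the 6m budget on a pod that cannot become Ready. A sketch of that gate (nodeIsReady is a hypothetical helper, not minikube's API):

    // nodeIsReady reports whether a node has condition Ready=True; a
    // status of "Unknown" (as logged above) yields false.
    func nodeIsReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
        node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }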
	I0926 17:54:40.884638    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.061560    4178 request.go:632] Waited for 176.87189ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000
	I0926 17:54:41.061616    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000
	I0926 17:54:41.061655    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.061670    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.061677    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.065303    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:41.262138    4178 request.go:632] Waited for 196.341694ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:41.262261    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:41.262270    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.262282    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.262290    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.266333    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:41.266689    4178 pod_ready.go:93] pod "kube-apiserver-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:41.266699    4178 pod_ready.go:82] duration metric: took 382.053003ms for pod "kube-apiserver-ha-476000" in "kube-system" namespace to be "Ready" ...
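[Editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's own token-bucket rate limiter (request.go) whenever it delays a request; the server-side Priority and Fairness machinery is not involved. client-go defaults to QPS=5 and Burst=10, which is what produces the ~180-200ms pauses between the paired GETs here. Raising the limits on the rest.Config silences the message; the values below are illustrative only, not what minikube configures:

    import "k8s.io/client-go/rest"

    // relaxThrottle raises client-go's default client-side rate limits
    // (QPS=5, Burst=10). Example values chosen for illustration.
    func relaxThrottle(cfg *rest.Config) *rest.Config {
        cfg.QPS = 50    // sustained requests per second
        cfg.Burst = 100 // short-term burst allowance
        return cfg
    }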
	I0926 17:54:41.266705    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.460472    4178 request.go:632] Waited for 193.723597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m02
	I0926 17:54:41.460525    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m02
	I0926 17:54:41.460535    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.460578    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.460588    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.464471    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:41.661359    4178 request.go:632] Waited for 196.505849ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:41.661462    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:41.661475    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.661486    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.661494    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.665427    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:41.665770    4178 pod_ready.go:93] pod "kube-apiserver-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:41.665780    4178 pod_ready.go:82] duration metric: took 399.068092ms for pod "kube-apiserver-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.665789    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.861535    4178 request.go:632] Waited for 195.701622ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m03
	I0926 17:54:41.861634    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m03
	I0926 17:54:41.861648    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.861668    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.861680    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.865792    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:42.061777    4178 request.go:632] Waited for 195.542882ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:42.061836    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:42.061869    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.061880    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.061888    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.066352    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:42.066752    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:42.066763    4178 pod_ready.go:82] duration metric: took 400.967857ms for pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:42.066770    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:42.066774    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.260909    4178 request.go:632] Waited for 194.055971ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000
	I0926 17:54:42.260962    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000
	I0926 17:54:42.261001    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.261021    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.261031    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.264905    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:42.460758    4178 request.go:632] Waited for 195.327303ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:42.460808    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:42.460816    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.460827    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.460837    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.464434    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:42.464776    4178 pod_ready.go:93] pod "kube-controller-manager-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:42.464786    4178 pod_ready.go:82] duration metric: took 398.004555ms for pod "kube-controller-manager-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.464793    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.660316    4178 request.go:632] Waited for 195.46211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m02
	I0926 17:54:42.660458    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m02
	I0926 17:54:42.660474    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.660486    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.660494    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.665327    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:42.860122    4178 request.go:632] Waited for 194.468161ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:42.860201    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:42.860211    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.860222    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.860231    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.864049    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:42.864456    4178 pod_ready.go:93] pod "kube-controller-manager-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:42.864465    4178 pod_ready.go:82] duration metric: took 399.6655ms for pod "kube-controller-manager-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.864473    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:43.060814    4178 request.go:632] Waited for 196.258122ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m03
	I0926 17:54:43.060925    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m03
	I0926 17:54:43.060935    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.060947    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.060956    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.065088    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:43.261824    4178 request.go:632] Waited for 196.351744ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:43.261944    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:43.261957    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.261967    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.261984    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.266272    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:43.266738    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:43.266748    4178 pod_ready.go:82] duration metric: took 402.268136ms for pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:43.266762    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:43.266768    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5d8nb" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:43.460501    4178 request.go:632] Waited for 193.687301ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d8nb
	I0926 17:54:43.460615    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d8nb
	I0926 17:54:43.460627    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.460639    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.460647    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.463846    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:43.662152    4178 request.go:632] Waited for 197.799796ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m04
	I0926 17:54:43.662296    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m04
	I0926 17:54:43.662312    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.662324    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.662334    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.666430    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:43.666928    4178 pod_ready.go:98] node "ha-476000-m04" hosting pod "kube-proxy-5d8nb" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m04" has status "Ready":"Unknown"
	I0926 17:54:43.666940    4178 pod_ready.go:82] duration metric: took 400.16396ms for pod "kube-proxy-5d8nb" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:43.666946    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m04" hosting pod "kube-proxy-5d8nb" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m04" has status "Ready":"Unknown"
	I0926 17:54:43.666950    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bpsqv" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:43.860782    4178 request.go:632] Waited for 193.758415ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bpsqv
	I0926 17:54:43.860851    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bpsqv
	I0926 17:54:43.860893    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.860905    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.860912    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.865061    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:44.060850    4178 request.go:632] Waited for 195.218122ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:44.060902    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:44.060920    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.060968    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.060976    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.065008    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:44.065426    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-proxy-bpsqv" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:44.065437    4178 pod_ready.go:82] duration metric: took 398.480723ms for pod "kube-proxy-bpsqv" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:44.065443    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-proxy-bpsqv" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:44.065448    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ctdh4" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.260264    4178 request.go:632] Waited for 194.757329ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ctdh4
	I0926 17:54:44.260395    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ctdh4
	I0926 17:54:44.260404    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.260417    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.260424    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.264668    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:44.461295    4178 request.go:632] Waited for 196.119983ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:44.461373    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:44.461384    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.461399    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.461407    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.465035    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:44.465397    4178 pod_ready.go:93] pod "kube-proxy-ctdh4" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:44.465406    4178 pod_ready.go:82] duration metric: took 399.951689ms for pod "kube-proxy-ctdh4" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.465413    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrsx7" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.660616    4178 request.go:632] Waited for 195.1575ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrsx7
	I0926 17:54:44.660704    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrsx7
	I0926 17:54:44.660715    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.660726    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.660734    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.664476    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:44.860447    4178 request.go:632] Waited for 195.571151ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:44.860565    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:44.860578    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.860588    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.860596    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.864038    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:44.864554    4178 pod_ready.go:93] pod "kube-proxy-nrsx7" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:44.864566    4178 pod_ready.go:82] duration metric: took 399.145507ms for pod "kube-proxy-nrsx7" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.864575    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.060924    4178 request.go:632] Waited for 196.301993ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000
	I0926 17:54:45.061011    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000
	I0926 17:54:45.061022    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.061034    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.061042    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.065277    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:45.260098    4178 request.go:632] Waited for 194.412657ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:45.260187    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:45.260208    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.260220    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.260229    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.264296    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:45.264558    4178 pod_ready.go:93] pod "kube-scheduler-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:45.264567    4178 pod_ready.go:82] duration metric: took 399.984402ms for pod "kube-scheduler-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.264574    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.460204    4178 request.go:632] Waited for 195.586272ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m02
	I0926 17:54:45.460285    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m02
	I0926 17:54:45.460296    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.460307    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.460315    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.463717    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:45.661528    4178 request.go:632] Waited for 197.284014ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:45.661624    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:45.661634    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.661645    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.661653    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.666080    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:45.666323    4178 pod_ready.go:93] pod "kube-scheduler-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:45.666333    4178 pod_ready.go:82] duration metric: took 401.752851ms for pod "kube-scheduler-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.666340    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.860703    4178 request.go:632] Waited for 194.311899ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m03
	I0926 17:54:45.860736    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m03
	I0926 17:54:45.860740    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.860746    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.860750    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.863521    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:46.061792    4178 request.go:632] Waited for 197.829608ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:46.061901    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:46.061915    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:46.061926    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:46.061934    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:46.065839    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:46.066244    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:46.066255    4178 pod_ready.go:82] duration metric: took 399.908641ms for pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:46.066262    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:46.066267    4178 pod_ready.go:39] duration metric: took 41.222971189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
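	
	Editor's note: the pod_ready.go:98 skips above are gated on the hosting node's Ready condition: a node reporting Ready as "Unknown" (here m03 and m04) cannot host a Ready pod, so the waiter records the condition and moves on instead of burning the 6m0s timeout. A hedged sketch of that gate (the helper name nodeIsReady is mine, not minikube's):
	
	package main
	
	import (
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
	)
	
	// nodeIsReady mirrors the check behind the `has status "Ready":"Unknown"`
	// lines: only ConditionTrue counts as Ready; Unknown or False means pods
	// on that node are skipped rather than waited for.
	func nodeIsReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		n := &corev1.Node{}
		n.Status.Conditions = []corev1.NodeCondition{{Type: corev1.NodeReady, Status: corev1.ConditionUnknown}}
		fmt.Println(nodeIsReady(n)) // false -> the pod wait is skipped
	}
	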
	I0926 17:54:46.066282    4178 api_server.go:52] waiting for apiserver process to appear ...
	I0926 17:54:46.066375    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:54:46.079414    4178 logs.go:276] 2 containers: [7aabe646a9fe 3fae3e334c04]
	I0926 17:54:46.079513    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:54:46.092379    4178 logs.go:276] 2 containers: [f525de12dc97 bcf266ec7564]
	I0926 17:54:46.092476    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:54:46.105011    4178 logs.go:276] 0 containers: []
	W0926 17:54:46.105025    4178 logs.go:278] No container was found matching "coredns"
	I0926 17:54:46.105107    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:54:46.118452    4178 logs.go:276] 2 containers: [d9fc15fd82f1 d8ed18933791]
	I0926 17:54:46.118550    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:54:46.132316    4178 logs.go:276] 2 containers: [b52376c9cdcd a5a9a7e18064]
	I0926 17:54:46.132402    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:54:46.145649    4178 logs.go:276] 2 containers: [350a07550ad1 f82271bde6b0]
	I0926 17:54:46.145746    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:54:46.160399    4178 logs.go:276] 2 containers: [b73a2fda347d 981336ad66c6]
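	
	Editor's note: each control-plane component above is located with the same pattern, `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, which matches the `k8s_`-prefixed container names kubelet assigns via cri-dockerd. A sketch of running one such lookup from Go, assuming local access to the docker CLI (the log runs it over SSH inside the VM):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Same filter the log uses: container names carry a "k8s_" prefix
		// followed by the component name.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_kube-apiserver", "--format", "{{.ID}}").Output()
		if err != nil {
			panic(err)
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
	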
	I0926 17:54:46.160426    4178 logs.go:123] Gathering logs for kube-apiserver [7aabe646a9fe] ...
	I0926 17:54:46.160432    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aabe646a9fe"
	I0926 17:54:46.180676    4178 logs.go:123] Gathering logs for kube-apiserver [3fae3e334c04] ...
	I0926 17:54:46.180690    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fae3e334c04"
	I0926 17:54:46.213941    4178 logs.go:123] Gathering logs for kindnet [981336ad66c6] ...
	I0926 17:54:46.213956    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 981336ad66c6"
	I0926 17:54:46.229008    4178 logs.go:123] Gathering logs for container status ...
	I0926 17:54:46.229022    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:54:46.263727    4178 logs.go:123] Gathering logs for dmesg ...
	I0926 17:54:46.263743    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:54:46.275216    4178 logs.go:123] Gathering logs for etcd [f525de12dc97] ...
	I0926 17:54:46.275229    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f525de12dc97"
	I0926 17:54:46.340546    4178 logs.go:123] Gathering logs for etcd [bcf266ec7564] ...
	I0926 17:54:46.340563    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf266ec7564"
	I0926 17:54:46.368786    4178 logs.go:123] Gathering logs for kube-scheduler [d8ed18933791] ...
	I0926 17:54:46.368802    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ed18933791"
	I0926 17:54:46.392911    4178 logs.go:123] Gathering logs for kube-proxy [a5a9a7e18064] ...
	I0926 17:54:46.392926    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5a9a7e18064"
	I0926 17:54:46.411685    4178 logs.go:123] Gathering logs for kubelet ...
	I0926 17:54:46.411700    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:54:46.453572    4178 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:54:46.453588    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:54:46.819319    4178 logs.go:123] Gathering logs for kube-scheduler [d9fc15fd82f1] ...
	I0926 17:54:46.819338    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9fc15fd82f1"
	I0926 17:54:46.834299    4178 logs.go:123] Gathering logs for kube-proxy [b52376c9cdcd] ...
	I0926 17:54:46.834315    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b52376c9cdcd"
	I0926 17:54:46.850264    4178 logs.go:123] Gathering logs for Docker ...
	I0926 17:54:46.850278    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:54:46.881220    4178 logs.go:123] Gathering logs for kube-controller-manager [350a07550ad1] ...
	I0926 17:54:46.881233    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 350a07550ad1"
	I0926 17:54:46.915123    4178 logs.go:123] Gathering logs for kube-controller-manager [f82271bde6b0] ...
	I0926 17:54:46.915139    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82271bde6b0"
	I0926 17:54:46.943154    4178 logs.go:123] Gathering logs for kindnet [b73a2fda347d] ...
	I0926 17:54:46.943169    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a2fda347d"
	I0926 17:54:49.459929    4178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 17:54:49.472910    4178 api_server.go:72] duration metric: took 1m9.339247453s to wait for apiserver process to appear ...
	I0926 17:54:49.472923    4178 api_server.go:88] waiting for apiserver healthz status ...
	I0926 17:54:49.473016    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:54:49.489783    4178 logs.go:276] 2 containers: [7aabe646a9fe 3fae3e334c04]
	I0926 17:54:49.489876    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:54:49.503069    4178 logs.go:276] 2 containers: [f525de12dc97 bcf266ec7564]
	I0926 17:54:49.503157    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:54:49.514340    4178 logs.go:276] 0 containers: []
	W0926 17:54:49.514353    4178 logs.go:278] No container was found matching "coredns"
	I0926 17:54:49.514430    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:54:49.528690    4178 logs.go:276] 2 containers: [d9fc15fd82f1 d8ed18933791]
	I0926 17:54:49.528782    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:54:49.540774    4178 logs.go:276] 2 containers: [b52376c9cdcd a5a9a7e18064]
	I0926 17:54:49.540870    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:54:49.553605    4178 logs.go:276] 2 containers: [350a07550ad1 f82271bde6b0]
	I0926 17:54:49.553693    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:54:49.566939    4178 logs.go:276] 2 containers: [b73a2fda347d 981336ad66c6]
	I0926 17:54:49.566961    4178 logs.go:123] Gathering logs for kindnet [981336ad66c6] ...
	I0926 17:54:49.566967    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 981336ad66c6"
	I0926 17:54:49.584163    4178 logs.go:123] Gathering logs for etcd [bcf266ec7564] ...
	I0926 17:54:49.584179    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf266ec7564"
	I0926 17:54:49.608092    4178 logs.go:123] Gathering logs for kube-controller-manager [f82271bde6b0] ...
	I0926 17:54:49.608107    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82271bde6b0"
	I0926 17:54:49.640526    4178 logs.go:123] Gathering logs for etcd [f525de12dc97] ...
	I0926 17:54:49.640542    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f525de12dc97"
	I0926 17:54:49.707920    4178 logs.go:123] Gathering logs for kube-scheduler [d9fc15fd82f1] ...
	I0926 17:54:49.707937    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9fc15fd82f1"
	I0926 17:54:49.725537    4178 logs.go:123] Gathering logs for kube-proxy [b52376c9cdcd] ...
	I0926 17:54:49.725551    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b52376c9cdcd"
	I0926 17:54:49.747118    4178 logs.go:123] Gathering logs for kube-proxy [a5a9a7e18064] ...
	I0926 17:54:49.747134    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5a9a7e18064"
	I0926 17:54:49.763059    4178 logs.go:123] Gathering logs for kindnet [b73a2fda347d] ...
	I0926 17:54:49.763073    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a2fda347d"
	I0926 17:54:49.780606    4178 logs.go:123] Gathering logs for container status ...
	I0926 17:54:49.780619    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:54:49.815474    4178 logs.go:123] Gathering logs for kubelet ...
	I0926 17:54:49.815490    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:54:49.856341    4178 logs.go:123] Gathering logs for kube-apiserver [3fae3e334c04] ...
	I0926 17:54:49.856359    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fae3e334c04"
	I0926 17:54:49.895001    4178 logs.go:123] Gathering logs for kube-apiserver [7aabe646a9fe] ...
	I0926 17:54:49.895016    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aabe646a9fe"
	I0926 17:54:49.915291    4178 logs.go:123] Gathering logs for kube-scheduler [d8ed18933791] ...
	I0926 17:54:49.915307    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ed18933791"
	I0926 17:54:49.931682    4178 logs.go:123] Gathering logs for kube-controller-manager [350a07550ad1] ...
	I0926 17:54:49.931698    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 350a07550ad1"
	I0926 17:54:49.962905    4178 logs.go:123] Gathering logs for Docker ...
	I0926 17:54:49.962920    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:54:49.995739    4178 logs.go:123] Gathering logs for dmesg ...
	I0926 17:54:49.995756    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:54:50.006748    4178 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:54:50.006764    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:54:52.683223    4178 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0926 17:54:52.688111    4178 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
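	
	Editor's note: the healthz gate above is a plain HTTPS GET against the apiserver's /healthz endpoint; a healthy control plane answers 200 with the literal body "ok". A minimal sketch (certificate verification is skipped purely to keep the example short; minikube trusts the cluster CA instead):
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.169.0.5:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // healthy apiserver: 200 "ok"
	}
	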
	I0926 17:54:52.688148    4178 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0926 17:54:52.688152    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:52.688158    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:52.688162    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:52.688774    4178 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0926 17:54:52.688866    4178 api_server.go:141] control plane version: v1.31.1
	I0926 17:54:52.688877    4178 api_server.go:131] duration metric: took 3.215937625s to wait for apiserver health ...
	I0926 17:54:52.688882    4178 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 17:54:52.688964    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:54:52.702208    4178 logs.go:276] 2 containers: [7aabe646a9fe 3fae3e334c04]
	I0926 17:54:52.702296    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:54:52.716057    4178 logs.go:276] 2 containers: [f525de12dc97 bcf266ec7564]
	I0926 17:54:52.716146    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:54:52.730288    4178 logs.go:276] 0 containers: []
	W0926 17:54:52.730303    4178 logs.go:278] No container was found matching "coredns"
	I0926 17:54:52.730387    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:54:52.744133    4178 logs.go:276] 2 containers: [d9fc15fd82f1 d8ed18933791]
	I0926 17:54:52.744229    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:54:52.757357    4178 logs.go:276] 2 containers: [b52376c9cdcd a5a9a7e18064]
	I0926 17:54:52.757447    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:54:52.770397    4178 logs.go:276] 2 containers: [350a07550ad1 f82271bde6b0]
	I0926 17:54:52.770488    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:54:52.783588    4178 logs.go:276] 2 containers: [b73a2fda347d 981336ad66c6]
	I0926 17:54:52.783609    4178 logs.go:123] Gathering logs for dmesg ...
	I0926 17:54:52.783615    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:54:52.794149    4178 logs.go:123] Gathering logs for kube-scheduler [d9fc15fd82f1] ...
	I0926 17:54:52.794162    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9fc15fd82f1"
	I0926 17:54:52.810239    4178 logs.go:123] Gathering logs for kube-proxy [a5a9a7e18064] ...
	I0926 17:54:52.810253    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5a9a7e18064"
	I0926 17:54:52.828364    4178 logs.go:123] Gathering logs for kube-controller-manager [350a07550ad1] ...
	I0926 17:54:52.828379    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 350a07550ad1"
	I0926 17:54:52.859712    4178 logs.go:123] Gathering logs for kindnet [b73a2fda347d] ...
	I0926 17:54:52.859726    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a2fda347d"
	I0926 17:54:52.877881    4178 logs.go:123] Gathering logs for container status ...
	I0926 17:54:52.877898    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:54:52.920788    4178 logs.go:123] Gathering logs for kube-scheduler [d8ed18933791] ...
	I0926 17:54:52.920802    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ed18933791"
	I0926 17:54:52.937686    4178 logs.go:123] Gathering logs for Docker ...
	I0926 17:54:52.937700    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:54:52.970435    4178 logs.go:123] Gathering logs for kubelet ...
	I0926 17:54:52.970449    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:54:53.015652    4178 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:54:53.015669    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:54:53.184377    4178 logs.go:123] Gathering logs for etcd [f525de12dc97] ...
	I0926 17:54:53.184391    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f525de12dc97"
	I0926 17:54:53.249067    4178 logs.go:123] Gathering logs for etcd [bcf266ec7564] ...
	I0926 17:54:53.249083    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf266ec7564"
	I0926 17:54:53.274003    4178 logs.go:123] Gathering logs for kube-controller-manager [f82271bde6b0] ...
	I0926 17:54:53.274019    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82271bde6b0"
	I0926 17:54:53.300047    4178 logs.go:123] Gathering logs for kube-apiserver [7aabe646a9fe] ...
	I0926 17:54:53.300062    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aabe646a9fe"
	I0926 17:54:53.321481    4178 logs.go:123] Gathering logs for kube-apiserver [3fae3e334c04] ...
	I0926 17:54:53.321495    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fae3e334c04"
	I0926 17:54:53.356023    4178 logs.go:123] Gathering logs for kube-proxy [b52376c9cdcd] ...
	I0926 17:54:53.356038    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b52376c9cdcd"
	I0926 17:54:53.374219    4178 logs.go:123] Gathering logs for kindnet [981336ad66c6] ...
	I0926 17:54:53.374233    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 981336ad66c6"
	I0926 17:54:55.893460    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0926 17:54:55.893486    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.893529    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.893539    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.899854    4178 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0926 17:54:55.904904    4178 system_pods.go:59] 26 kube-system pods found
	I0926 17:54:55.904920    4178 system_pods.go:61] "coredns-7c65d6cfc9-44l9n" [8009053f-fc43-43ba-a44e-fd1d53e88617] Running
	I0926 17:54:55.904925    4178 system_pods.go:61] "coredns-7c65d6cfc9-7jwgv" [20fb38a0-b993-41b3-9c91-98802d75ef47] Running
	I0926 17:54:55.904928    4178 system_pods.go:61] "etcd-ha-476000" [469cfe32-fa29-4816-afd7-3f115a3a6c67] Running
	I0926 17:54:55.904930    4178 system_pods.go:61] "etcd-ha-476000-m02" [d4c3ec0c-a151-4047-9a30-93cae10d24a0] Running
	I0926 17:54:55.904933    4178 system_pods.go:61] "etcd-ha-476000-m03" [e9166a94-af57-49ec-91a3-dc0c36083ca4] Running
	I0926 17:54:55.904936    4178 system_pods.go:61] "kindnet-44vxl" [488a3806-d7c1-4397-bff8-00d9ea3cb984] Running
	I0926 17:54:55.904938    4178 system_pods.go:61] "kindnet-4pnxr" [23b10d5b-fd63-4284-9c44-413b8bf80354] Running
	I0926 17:54:55.904941    4178 system_pods.go:61] "kindnet-hhrtc" [7ac457e3-ae85-4314-99dc-da37f5875807] Running
	I0926 17:54:55.904943    4178 system_pods.go:61] "kindnet-lgj66" [63fc5d71-c403-40d7-85a7-6ff48e307a79] Running
	I0926 17:54:55.904946    4178 system_pods.go:61] "kube-apiserver-ha-476000" [5c5e9fb4-0cda-4892-bae5-e51e839d2573] Running
	I0926 17:54:55.904948    4178 system_pods.go:61] "kube-apiserver-ha-476000-m02" [2d6d0355-152b-4be4-b810-f1c80c9ef24f] Running
	I0926 17:54:55.904951    4178 system_pods.go:61] "kube-apiserver-ha-476000-m03" [3ce79eb3-635c-4632-82e0-7240a7d49c50] Running
	I0926 17:54:55.904954    4178 system_pods.go:61] "kube-controller-manager-ha-476000" [9edef370-886e-4260-a3d3-99be564d90c6] Running
	I0926 17:54:55.904957    4178 system_pods.go:61] "kube-controller-manager-ha-476000-m02" [5eeed951-eaba-436f-9f80-8f68cb50425d] Running
	I0926 17:54:55.904960    4178 system_pods.go:61] "kube-controller-manager-ha-476000-m03" [e60f68a5-a353-4501-81c1-ac7d76820373] Running
	I0926 17:54:55.904962    4178 system_pods.go:61] "kube-proxy-5d8nb" [1e9c0178-2d58-4ca1-9fbe-c8e54d91bf1a] Running
	I0926 17:54:55.904965    4178 system_pods.go:61] "kube-proxy-bpsqv" [c264b112-c037-4332-9c47-2561ecb928f1] Running
	I0926 17:54:55.904967    4178 system_pods.go:61] "kube-proxy-ctdh4" [9a21a19a-5cad-4eb9-97f3-4c654fbbe59b] Running
	I0926 17:54:55.904970    4178 system_pods.go:61] "kube-proxy-nrsx7" [14b031d0-b044-4500-8cc1-1397c83c1886] Running
	I0926 17:54:55.904973    4178 system_pods.go:61] "kube-scheduler-ha-476000" [ef0e308c-1b28-413c-846e-24935489434d] Running
	I0926 17:54:55.904976    4178 system_pods.go:61] "kube-scheduler-ha-476000-m02" [2b75a7bc-a19c-4aeb-8521-10bd963cacbb] Running
	I0926 17:54:55.904978    4178 system_pods.go:61] "kube-scheduler-ha-476000-m03" [206cbf3f-bff1-4164-a622-3be7593c08ac] Running
	I0926 17:54:55.904981    4178 system_pods.go:61] "kube-vip-ha-476000" [376a9128-fe3c-41aa-b26c-da921ae20e68] Running
	I0926 17:54:55.904997    4178 system_pods.go:61] "kube-vip-ha-476000-m02" [7e907e74-7f15-4c04-b463-682a14766e66] Running
	I0926 17:54:55.905002    4178 system_pods.go:61] "kube-vip-ha-476000-m03" [bf2e3b8c-cf42-45b0-a4d7-a32b820bab21] Running
	I0926 17:54:55.905005    4178 system_pods.go:61] "storage-provisioner" [e3e367a7-6cda-4177-a81d-7897333308d7] Running
	I0926 17:54:55.905009    4178 system_pods.go:74] duration metric: took 3.216111125s to wait for pod list to return data ...
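	
	Editor's note: the system_pods wait above is a single list of the kube-system namespace followed by a per-pod phase check. A sketch of the same enumeration with client-go (kubeconfig path hypothetical, as before):
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("%q not running: %s\n", p.Name, p.Status.Phase)
			}
		}
	}
	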
	I0926 17:54:55.905015    4178 default_sa.go:34] waiting for default service account to be created ...
	I0926 17:54:55.905062    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0926 17:54:55.905068    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.905073    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.905077    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.907842    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:55.908016    4178 default_sa.go:45] found service account: "default"
	I0926 17:54:55.908026    4178 default_sa.go:55] duration metric: took 3.006211ms for default service account to be created ...
	I0926 17:54:55.908031    4178 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 17:54:55.908061    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0926 17:54:55.908066    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.908071    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.908075    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.912026    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:55.917054    4178 system_pods.go:86] 26 kube-system pods found
	I0926 17:54:55.917066    4178 system_pods.go:89] "coredns-7c65d6cfc9-44l9n" [8009053f-fc43-43ba-a44e-fd1d53e88617] Running
	I0926 17:54:55.917070    4178 system_pods.go:89] "coredns-7c65d6cfc9-7jwgv" [20fb38a0-b993-41b3-9c91-98802d75ef47] Running
	I0926 17:54:55.917073    4178 system_pods.go:89] "etcd-ha-476000" [469cfe32-fa29-4816-afd7-3f115a3a6c67] Running
	I0926 17:54:55.917076    4178 system_pods.go:89] "etcd-ha-476000-m02" [d4c3ec0c-a151-4047-9a30-93cae10d24a0] Running
	I0926 17:54:55.917080    4178 system_pods.go:89] "etcd-ha-476000-m03" [e9166a94-af57-49ec-91a3-dc0c36083ca4] Running
	I0926 17:54:55.917083    4178 system_pods.go:89] "kindnet-44vxl" [488a3806-d7c1-4397-bff8-00d9ea3cb984] Running
	I0926 17:54:55.917085    4178 system_pods.go:89] "kindnet-4pnxr" [23b10d5b-fd63-4284-9c44-413b8bf80354] Running
	I0926 17:54:55.917088    4178 system_pods.go:89] "kindnet-hhrtc" [7ac457e3-ae85-4314-99dc-da37f5875807] Running
	I0926 17:54:55.917091    4178 system_pods.go:89] "kindnet-lgj66" [63fc5d71-c403-40d7-85a7-6ff48e307a79] Running
	I0926 17:54:55.917094    4178 system_pods.go:89] "kube-apiserver-ha-476000" [5c5e9fb4-0cda-4892-bae5-e51e839d2573] Running
	I0926 17:54:55.917097    4178 system_pods.go:89] "kube-apiserver-ha-476000-m02" [2d6d0355-152b-4be4-b810-f1c80c9ef24f] Running
	I0926 17:54:55.917100    4178 system_pods.go:89] "kube-apiserver-ha-476000-m03" [3ce79eb3-635c-4632-82e0-7240a7d49c50] Running
	I0926 17:54:55.917103    4178 system_pods.go:89] "kube-controller-manager-ha-476000" [9edef370-886e-4260-a3d3-99be564d90c6] Running
	I0926 17:54:55.917106    4178 system_pods.go:89] "kube-controller-manager-ha-476000-m02" [5eeed951-eaba-436f-9f80-8f68cb50425d] Running
	I0926 17:54:55.917110    4178 system_pods.go:89] "kube-controller-manager-ha-476000-m03" [e60f68a5-a353-4501-81c1-ac7d76820373] Running
	I0926 17:54:55.917113    4178 system_pods.go:89] "kube-proxy-5d8nb" [1e9c0178-2d58-4ca1-9fbe-c8e54d91bf1a] Running
	I0926 17:54:55.917116    4178 system_pods.go:89] "kube-proxy-bpsqv" [c264b112-c037-4332-9c47-2561ecb928f1] Running
	I0926 17:54:55.917123    4178 system_pods.go:89] "kube-proxy-ctdh4" [9a21a19a-5cad-4eb9-97f3-4c654fbbe59b] Running
	I0926 17:54:55.917126    4178 system_pods.go:89] "kube-proxy-nrsx7" [14b031d0-b044-4500-8cc1-1397c83c1886] Running
	I0926 17:54:55.917129    4178 system_pods.go:89] "kube-scheduler-ha-476000" [ef0e308c-1b28-413c-846e-24935489434d] Running
	I0926 17:54:55.917132    4178 system_pods.go:89] "kube-scheduler-ha-476000-m02" [2b75a7bc-a19c-4aeb-8521-10bd963cacbb] Running
	I0926 17:54:55.917135    4178 system_pods.go:89] "kube-scheduler-ha-476000-m03" [206cbf3f-bff1-4164-a622-3be7593c08ac] Running
	I0926 17:54:55.917138    4178 system_pods.go:89] "kube-vip-ha-476000" [376a9128-fe3c-41aa-b26c-da921ae20e68] Running
	I0926 17:54:55.917140    4178 system_pods.go:89] "kube-vip-ha-476000-m02" [7e907e74-7f15-4c04-b463-682a14766e66] Running
	I0926 17:54:55.917144    4178 system_pods.go:89] "kube-vip-ha-476000-m03" [bf2e3b8c-cf42-45b0-a4d7-a32b820bab21] Running
	I0926 17:54:55.917146    4178 system_pods.go:89] "storage-provisioner" [e3e367a7-6cda-4177-a81d-7897333308d7] Running
	I0926 17:54:55.917151    4178 system_pods.go:126] duration metric: took 9.116472ms to wait for k8s-apps to be running ...
	I0926 17:54:55.917160    4178 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 17:54:55.917225    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 17:54:55.928854    4178 system_svc.go:56] duration metric: took 11.69353ms WaitForService to wait for kubelet
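	
	Editor's note: the kubelet service probe above leans entirely on systemctl's exit code: `is-active --quiet` prints nothing and exits 0 only when the unit is active. A sketch of the same check (sudo and the extra "service" token from the logged command dropped for brevity):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Exit status is the whole answer: nil error means the unit is active.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}
	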
	I0926 17:54:55.928867    4178 kubeadm.go:582] duration metric: took 1m15.795183486s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:54:55.928878    4178 node_conditions.go:102] verifying NodePressure condition ...
	I0926 17:54:55.928918    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0926 17:54:55.928924    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.928930    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.928933    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.932146    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:55.933143    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933159    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933173    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933176    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933181    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933183    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933186    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933190    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933193    4178 node_conditions.go:105] duration metric: took 4.311525ms to run NodePressure ...
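	
	Editor's note: the NodePressure pass above lists all nodes once and reads each node's reported capacity (here every node shows 17734596Ki of ephemeral storage and 2 CPUs). A sketch of pulling those figures with client-go:
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		}
	}
	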
	I0926 17:54:55.933202    4178 start.go:241] waiting for startup goroutines ...
	I0926 17:54:55.933219    4178 start.go:255] writing updated cluster config ...
	I0926 17:54:55.954947    4178 out.go:201] 
	I0926 17:54:55.975717    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:54:55.975787    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:54:55.997338    4178 out.go:177] * Starting "ha-476000-m03" control-plane node in "ha-476000" cluster
	I0926 17:54:56.055744    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:54:56.055778    4178 cache.go:56] Caching tarball of preloaded images
	I0926 17:54:56.056007    4178 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 17:54:56.056029    4178 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:54:56.056173    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:54:56.057121    4178 start.go:360] acquireMachinesLock for ha-476000-m03: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:54:56.057290    4178 start.go:364] duration metric: took 139.967µs to acquireMachinesLock for "ha-476000-m03"
	I0926 17:54:56.057321    4178 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:54:56.057331    4178 fix.go:54] fixHost starting: m03
	I0926 17:54:56.057738    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:54:56.057766    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:54:56.066973    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52106
	I0926 17:54:56.067348    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:54:56.067691    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:54:56.067705    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:54:56.067918    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:54:56.068036    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:54:56.068122    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetState
	I0926 17:54:56.068201    4178 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:54:56.068289    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid from json: 3537
	I0926 17:54:56.069219    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid 3537 missing from process table
	I0926 17:54:56.069237    4178 fix.go:112] recreateIfNeeded on ha-476000-m03: state=Stopped err=<nil>
	I0926 17:54:56.069245    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	W0926 17:54:56.069331    4178 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:54:56.090482    4178 out.go:177] * Restarting existing hyperkit VM for "ha-476000-m03" ...
	I0926 17:54:56.132629    4178 main.go:141] libmachine: (ha-476000-m03) Calling .Start
	I0926 17:54:56.132887    4178 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:54:56.132957    4178 main.go:141] libmachine: (ha-476000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid
	I0926 17:54:56.134746    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid 3537 missing from process table
	I0926 17:54:56.134764    4178 main.go:141] libmachine: (ha-476000-m03) DBG | pid 3537 is in state "Stopped"
	I0926 17:54:56.134782    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid...
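	
	Editor's note: "hyperkit pid 3537 missing from process table" above is the driver probing the pid recorded in hyperkit.pid; on Unix, sending signal 0 tests liveness without actually delivering anything, and a dead pid means the file is stale and safe to remove. A hedged sketch of that probe (pid-file path hypothetical):
	
	package main
	
	import (
		"fmt"
		"os"
		"strconv"
		"strings"
		"syscall"
	)
	
	// pidAlive sends signal 0: no signal is delivered, but the error tells us
	// whether the pid still exists in the process table.
	func pidAlive(pid int) bool {
		return syscall.Kill(pid, syscall.Signal(0)) == nil
	}
	
	func main() {
		raw, err := os.ReadFile("/tmp/hyperkit.pid") // stands in for .../ha-476000-m03/hyperkit.pid
		if err != nil {
			panic(err)
		}
		pid, err := strconv.Atoi(strings.TrimSpace(string(raw)))
		if err != nil {
			panic(err)
		}
		if !pidAlive(pid) {
			fmt.Printf("pid %d missing from process table; removing stale pid file\n", pid)
			os.Remove("/tmp/hyperkit.pid")
		}
	}
	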
	I0926 17:54:56.135225    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Using UUID 91a51069-a363-4c64-acd8-a07fa14dbb0d
	I0926 17:54:56.162007    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Generated MAC 66:6f:5a:2d:e2:16
	I0926 17:54:56.162027    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000
	I0926 17:54:56.162143    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"91a51069-a363-4c64-acd8-a07fa14dbb0d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003accc0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:54:56.162181    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"91a51069-a363-4c64-acd8-a07fa14dbb0d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003accc0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:54:56.162253    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "91a51069-a363-4c64-acd8-a07fa14dbb0d", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/ha-476000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"}
	I0926 17:54:56.162300    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 91a51069-a363-4c64-acd8-a07fa14dbb0d -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/ha-476000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"
	I0926 17:54:56.162312    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 17:54:56.163637    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Pid is 4226
	I0926 17:54:56.164043    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Attempt 0
	I0926 17:54:56.164071    4178 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:54:56.164140    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid from json: 4226
	I0926 17:54:56.166126    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Searching for 66:6f:5a:2d:e2:16 in /var/db/dhcpd_leases ...
	I0926 17:54:56.166206    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:54:56.166235    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 17:54:56.166254    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 17:54:56.166288    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 17:54:56.166308    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f7515c}
	I0926 17:54:56.166318    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Found match: 66:6f:5a:2d:e2:16
	I0926 17:54:56.166327    4178 main.go:141] libmachine: (ha-476000-m03) DBG | IP: 192.169.0.7
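	
	Editor's note: with no guest agent, the driver recovers the VM's IP by matching the generated MAC (66:6f:5a:2d:e2:16) against macOS's DHCP lease database, /var/db/dhcpd_leases, as the "Searching for ... Found match" lines show. A minimal parser sketch; the key=value-per-line block format is assumed from the dhcp entries echoed above:
	
	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)
	
	// leaseIPFor scans dhcpd_leases for the block whose hw_address contains mac
	// and returns the ip_address seen just before it in that block.
	func leaseIPFor(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
	
		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
				return ip, nil
			}
		}
		return "", fmt.Errorf("no lease for %s", mac)
	}
	
	func main() {
		ip, err := leaseIPFor("/var/db/dhcpd_leases", "66:6f:5a:2d:e2:16")
		if err != nil {
			panic(err)
		}
		fmt.Println("IP:", ip)
	}
	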
	I0926 17:54:56.166332    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetConfigRaw
	I0926 17:54:56.166976    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetIP
	I0926 17:54:56.167202    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:54:56.167675    4178 machine.go:93] provisionDockerMachine start ...
	I0926 17:54:56.167686    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:54:56.167814    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:54:56.167961    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:54:56.168088    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:54:56.168207    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:54:56.168321    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:54:56.168450    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:54:56.168613    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:54:56.168622    4178 main.go:141] libmachine: About to run SSH command:
	hostname
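	
	Editor's note: "Using SSH client type: native" means the provisioner dials the guest itself (here 192.169.0.7:22 with the machine's generated id_rsa) rather than shelling out to ssh, then runs one command per session, starting with `hostname`. A sketch with golang.org/x/crypto/ssh; the key path is illustrative, and host-key checking is disabled only to keep the sketch short:
	
	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		key, err := os.ReadFile("/path/to/machines/ha-476000-m03/id_rsa") // hypothetical path
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		client, err := ssh.Dial("tcp", "192.169.0.7:22", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
		})
		if err != nil {
			panic(err)
		}
		defer client.Close()
	
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("SSH cmd output: %s", out)
	}
	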
	I0926 17:54:56.172038    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 17:54:56.180188    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 17:54:56.181229    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:54:56.181258    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:54:56.181274    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:54:56.181290    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:54:56.563523    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 17:54:56.563541    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 17:54:56.678338    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:54:56.678355    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:54:56.678363    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:54:56.678373    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:54:56.679203    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 17:54:56.679212    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 17:55:02.300815    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0926 17:55:02.300833    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0926 17:55:02.300855    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0926 17:55:02.325228    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0926 17:55:31.235618    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 17:55:31.235633    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetMachineName
	I0926 17:55:31.235773    4178 buildroot.go:166] provisioning hostname "ha-476000-m03"
	I0926 17:55:31.235783    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetMachineName
	I0926 17:55:31.235886    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.235992    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.236097    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.236189    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.236274    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.236414    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.236550    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.236559    4178 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-476000-m03 && echo "ha-476000-m03" | sudo tee /etc/hostname
	I0926 17:55:31.305642    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-476000-m03
	
	I0926 17:55:31.305657    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.305790    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.305908    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.306006    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.306089    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.306235    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.306383    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.306394    4178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-476000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-476000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-476000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:55:31.369873    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 17:55:31.369889    4178 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 17:55:31.369903    4178 buildroot.go:174] setting up certificates
	I0926 17:55:31.369909    4178 provision.go:84] configureAuth start
	I0926 17:55:31.369916    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetMachineName
	I0926 17:55:31.370048    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetIP
	I0926 17:55:31.370147    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.370234    4178 provision.go:143] copyHostCerts
	I0926 17:55:31.370268    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:55:31.370317    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 17:55:31.370322    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:55:31.370451    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 17:55:31.370647    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:55:31.370676    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 17:55:31.370680    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:55:31.370748    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 17:55:31.370903    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:55:31.370932    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 17:55:31.370937    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:55:31.371006    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 17:55:31.371150    4178 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.ha-476000-m03 san=[127.0.0.1 192.169.0.7 ha-476000-m03 localhost minikube]
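	The server certificate generated above carries SANs for the node IP (192.169.0.7), the node hostname, localhost, and minikube. The SAN list on the generated cert can be inspected directly (a sketch, assuming openssl is available on the host):
	    openssl x509 -in /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem \
	      -noout -text | grep -A1 'Subject Alternative Name'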
	I0926 17:55:31.544988    4178 provision.go:177] copyRemoteCerts
	I0926 17:55:31.545045    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker
	I0926 17:55:31.545059    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.545196    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.545298    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.545402    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.545491    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:31.580851    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 17:55:31.580928    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 17:55:31.601357    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 17:55:31.601440    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 17:55:31.621840    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 17:55:31.621921    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 17:55:31.641722    4178 provision.go:87] duration metric: took 271.803372ms to configureAuth
	I0926 17:55:31.641736    4178 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:55:31.641909    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:55:31.641923    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:31.642055    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.642148    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.642236    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.642329    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.642416    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.642531    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.642652    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.642659    4178 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:55:31.699187    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:55:31.699200    4178 buildroot.go:70] root file system type: tmpfs
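	The probe above reports tmpfs for /, meaning the Buildroot guest runs from RAM and changes do not survive a reboot, which is why the docker unit is rewritten on every provision. The same check, plus an equivalent findmnt form (runnable on any Linux guest):
	    df --output=fstype / | tail -n 1   # prints e.g. tmpfs
	    findmnt -n -o FSTYPE /             # equivalent one-liner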
	I0926 17:55:31.699283    4178 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:55:31.699296    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.699424    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.699525    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.699630    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.699725    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.699863    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.700007    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.700056    4178 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:55:31.769790    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
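	The unit rendered above is written to /lib/systemd/system/docker.service.new and only swapped in below if it differs from the installed copy. Once installed, it can be sanity-checked on the guest (a sketch; systemd-analyze requires the .service suffix, so it is run against the installed file, not the .new one):
	    sudo systemd-analyze verify /lib/systemd/system/docker.service
	    systemctl cat docker --no-pager   # show the unit systemd actually loaded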
	I0926 17:55:31.769808    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.769942    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.770041    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.770127    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.770216    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.770341    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.770484    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.770496    4178 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:55:33.400017    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
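	diff exits non-zero here because no docker.service existed yet, so the fallback branch installs the new unit, reloads systemd, enables the service (creating the symlink above), and restarts docker. The same compare-then-swap idiom as a standalone sketch (paths as in the log):
	    NEW=/lib/systemd/system/docker.service.new
	    DST=/lib/systemd/system/docker.service
	    if ! sudo diff -u "$DST" "$NEW"; then   # non-zero: files differ or DST is missing
	        sudo mv "$NEW" "$DST"
	        sudo systemctl daemon-reload
	        sudo systemctl enable docker
	        sudo systemctl restart docker
	    fi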
	I0926 17:55:33.400032    4178 machine.go:96] duration metric: took 37.232210795s to provisionDockerMachine
	I0926 17:55:33.400040    4178 start.go:293] postStartSetup for "ha-476000-m03" (driver="hyperkit")
	I0926 17:55:33.400054    4178 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:55:33.400067    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.400257    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:55:33.400271    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.400365    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.400451    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.400540    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.400615    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:33.437533    4178 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:55:33.440663    4178 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 17:55:33.440673    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 17:55:33.440763    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 17:55:33.440901    4178 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 17:55:33.440910    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 17:55:33.441066    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 17:55:33.449179    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:55:33.469328    4178 start.go:296] duration metric: took 69.278399ms for postStartSetup
	I0926 17:55:33.469350    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.469543    4178 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0926 17:55:33.469556    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.469645    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.469723    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.469812    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.469885    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:33.505216    4178 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0926 17:55:33.505294    4178 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0926 17:55:33.540120    4178 fix.go:56] duration metric: took 37.482649135s for fixHost
	I0926 17:55:33.540150    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.540287    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.540382    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.540461    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.540540    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.540677    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:33.540816    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:33.540823    4178 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:55:33.598810    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398533.714160628
	
	I0926 17:55:33.598825    4178 fix.go:216] guest clock: 1727398533.714160628
	I0926 17:55:33.598831    4178 fix.go:229] Guest: 2024-09-26 17:55:33.714160628 -0700 PDT Remote: 2024-09-26 17:55:33.540136 -0700 PDT m=+153.107512249 (delta=174.024628ms)
	I0926 17:55:33.598841    4178 fix.go:200] guest clock delta is within tolerance: 174.024628ms
	I0926 17:55:33.598846    4178 start.go:83] releasing machines lock for "ha-476000-m03", held for 37.541403544s
	I0926 17:55:33.598861    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.598984    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetIP
	I0926 17:55:33.620720    4178 out.go:177] * Found network options:
	I0926 17:55:33.640782    4178 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0926 17:55:33.662722    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	W0926 17:55:33.662755    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:55:33.662789    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.663752    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.664030    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.664220    4178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:55:33.664265    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	W0926 17:55:33.664303    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	W0926 17:55:33.664331    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:55:33.664429    4178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 17:55:33.664449    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.664488    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.664703    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.664719    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.664903    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.664932    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.665066    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:33.665091    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.665207    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	W0926 17:55:33.697895    4178 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:55:33.697966    4178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 17:55:33.748934    4178 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 17:55:33.748959    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:55:33.749065    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:55:33.765581    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 17:55:33.775502    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:55:33.785025    4178 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:55:33.785083    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:55:33.794919    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:55:33.804605    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:55:33.814324    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:55:33.824237    4178 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:55:33.832956    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:55:33.841773    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:55:33.851179    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
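	The sed edits above force containerd to the cgroupfs driver (SystemdCgroup = false), migrate runtime.v1.linux and runc.v1 references to io.containerd.runc.v2, and re-enable unprivileged ports. The effective cgroup setting can be confirmed after the edits (a sketch, run on the guest):
	    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false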
	I0926 17:55:33.860818    4178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:55:33.869929    4178 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 17:55:33.870002    4178 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 17:55:33.880612    4178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
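	The sysctl probe just above failed because br_netfilter was not loaded, so the module is loaded explicitly and IPv4 forwarding is enabled, both prerequisites for Kubernetes pod networking. Equivalent steps with a verification line (a sketch, run as root on the guest):
	    modprobe br_netfilter
	    echo 1 > /proc/sys/net/ipv4/ip_forward
	    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both keys should now resolve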
	I0926 17:55:33.888804    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:55:33.989453    4178 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 17:55:34.008589    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:55:34.008666    4178 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:55:34.033408    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:55:34.045976    4178 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:55:34.061768    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:55:34.072236    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:55:34.082936    4178 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 17:55:34.101453    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:55:34.111855    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:55:34.126151    4178 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:55:34.129207    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:55:34.136448    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 17:55:34.149966    4178 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:55:34.247760    4178 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:55:34.364359    4178 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:55:34.364382    4178 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
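	The 130-byte daemon.json written here carries docker's cgroupfs configuration; the log records only its size, not its content. Once the daemon is up, the installed file and the effective driver can be inspected (a sketch):
	    sudo cat /etc/docker/daemon.json
	    docker info --format '{{.CgroupDriver}}'   # expect: cgroupfs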
	I0926 17:55:34.380269    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:55:34.475811    4178 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:56:35.519197    4178 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.04314195s)
	I0926 17:56:35.519276    4178 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0926 17:56:35.552893    4178 out.go:201] 
	W0926 17:56:35.574257    4178 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 27 00:55:31 ha-476000-m03 systemd[1]: Starting Docker Application Container Engine...
	Sep 27 00:55:31 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:31.500016553Z" level=info msg="Starting up"
	Sep 27 00:55:31 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:31.500635723Z" level=info msg="containerd not running, starting managed containerd"
	Sep 27 00:55:31 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:31.501585462Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=510
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.515859502Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530811327Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530896497Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530963742Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530999016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531160593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531211393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531353040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531394128Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531431029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531461249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531611451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531854923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533401951Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533446517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533570107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533614884Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533785548Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533833312Z" level=info msg="metadata content store policy set" policy=shared
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537372044Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537425387Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537458961Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537519539Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537555242Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537622818Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537842730Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537922428Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537957588Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537987448Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538017362Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538049217Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538078685Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538107984Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538137843Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538167077Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538198997Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538230397Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538266484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538296944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538326105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538358875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538390741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538420029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538458203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538495889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538528790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538561681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538590379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538618723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538647795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538678724Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538713636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538743343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538771404Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538879453Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538923135Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538973990Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539015313Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539070453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539103724Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539133731Z" level=info msg="NRI interface is disabled by configuration."
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539314481Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539398768Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539457208Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539540620Z" level=info msg="containerd successfully booted in 0.024310s"
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.523809928Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.557923590Z" level=info msg="Loading containers: start."
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.687864975Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.754261548Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.488464069Z" level=info msg="Loading containers: done."
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495297411Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495333206Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495348892Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495450205Z" level=info msg="Daemon has completed initialization"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.514076327Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.514159018Z" level=info msg="API listen on [::]:2376"
	Sep 27 00:55:33 ha-476000-m03 systemd[1]: Started Docker Application Container Engine.
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.603579868Z" level=info msg="Processing signal 'terminated'"
	Sep 27 00:55:34 ha-476000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.604826953Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.605154827Z" level=info msg="Daemon shutdown complete"
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.605194895Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.605243671Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 27 00:55:35 ha-476000-m03 systemd[1]: docker.service: Deactivated successfully.
	Sep 27 00:55:35 ha-476000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Sep 27 00:55:35 ha-476000-m03 systemd[1]: Starting Docker Application Container Engine...
	Sep 27 00:55:35 ha-476000-m03 dockerd[1093]: time="2024-09-27T00:55:35.644572631Z" level=info msg="Starting up"
	Sep 27 00:56:35 ha-476000-m03 dockerd[1093]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 27 00:56:35 ha-476000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 27 00:56:35 ha-476000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 27 00:56:35 ha-476000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
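	The root cause in the journal above is the second dockerd start (pid 1093) timing out while dialing /run/containerd/containerd.sock: the system containerd socket was not answering within the 60s deadline, even though containerd had been restarted moments earlier. A triage sketch for this state (run on the guest over SSH):
	    systemctl status containerd --no-pager        # is the system containerd actually up?
	    ls -l /run/containerd/containerd.sock         # does the socket dockerd dials exist?
	    journalctl -u containerd --no-pager -n 20     # recent containerd errors, if any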
	W0926 17:56:35.574334    4178 out.go:270] * 
	W0926 17:56:35.575462    4178 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:56:35.658842    4178 out.go:201] 

                                                
                                                
** /stderr **
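The dockerd failure captured above comes down to one line: dockerd gave up waiting for its managed containerd at /run/containerd/containerd.sock ("context deadline exceeded"). A minimal triage sketch, assuming SSH access to the affected node through this profile and that ctr is present on the guest (these commands mirror the form the harness uses; they are not part of the logged run):

    # Is containerd's unit up, and does its socket answer?
    out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m03 "sudo systemctl status containerd --no-pager"
    out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m03 "sudo ctr --address /run/containerd/containerd.sock version"

If the ctr call hangs or errors, the socket is dead and restarting docker.service alone will keep hitting the same dial timeout.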
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-amd64 start -p ha-476000 --wait=true -v=7 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-476000 -n ha-476000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-476000 logs -n 25: (3.272113319s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-476000 cp ha-476000-m03:/home/docker/cp-test.txt                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04:/home/docker/cp-test_ha-476000-m03_ha-476000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n ha-476000-m04 sudo cat                                                                                      | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /home/docker/cp-test_ha-476000-m03_ha-476000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-476000 cp testdata/cp-test.txt                                                                                            | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3898402723/001/cp-test_ha-476000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000:/home/docker/cp-test_ha-476000-m04_ha-476000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n ha-476000 sudo cat                                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /home/docker/cp-test_ha-476000-m04_ha-476000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m02:/home/docker/cp-test_ha-476000-m04_ha-476000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n ha-476000-m02 sudo cat                                                                                      | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /home/docker/cp-test_ha-476000-m04_ha-476000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m03:/home/docker/cp-test_ha-476000-m04_ha-476000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n ha-476000-m03 sudo cat                                                                                      | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /home/docker/cp-test_ha-476000-m04_ha-476000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-476000 node stop m02 -v=7                                                                                                 | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-476000 node start m02 -v=7                                                                                                | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:47 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-476000 -v=7                                                                                                       | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:47 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-476000 -v=7                                                                                                            | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:47 PDT | 26 Sep 24 17:47 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-476000 --wait=true -v=7                                                                                                | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:47 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-476000                                                                                                            | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:49 PDT |                     |
	| node    | ha-476000 node delete m03 -v=7                                                                                               | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:49 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-476000 stop -v=7                                                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:49 PDT | 26 Sep 24 17:53 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-476000 --wait=true                                                                                                     | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:53 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
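Each cp row in the audit table is paired with ssh rows that read the copied file back on the destination node. Reconstructed from the Args column, one such round trip looks like this (a sketch using the same binary and profile; paths as shown in the table):

    out/minikube-darwin-amd64 -p ha-476000 cp testdata/cp-test.txt ha-476000-m04:/home/docker/cp-test.txt
    out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m04 "sudo cat /home/docker/cp-test.txt"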
	
	
	==> Last Start <==
	Log file created at: 2024/09/26 17:53:00
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 17:53:00.467998    4178 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:53:00.468247    4178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:53:00.468252    4178 out.go:358] Setting ErrFile to fd 2...
	I0926 17:53:00.468256    4178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:53:00.468436    4178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 17:53:00.469901    4178 out.go:352] Setting JSON to false
	I0926 17:53:00.492370    4178 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3150,"bootTime":1727395230,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0926 17:53:00.492530    4178 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:53:00.514400    4178 out.go:177] * [ha-476000] minikube v1.34.0 on Darwin 14.6.1
	I0926 17:53:00.557228    4178 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:53:00.557300    4178 notify.go:220] Checking for updates...
	I0926 17:53:00.599719    4178 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:00.621009    4178 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0926 17:53:00.642091    4178 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:53:00.662936    4178 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 17:53:00.684204    4178 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:53:00.705550    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:00.706120    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.706169    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:00.715431    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52037
	I0926 17:53:00.715807    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:00.716207    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:00.716243    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:00.716493    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:00.716626    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:00.716833    4178 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:53:00.717101    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.717132    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:00.725380    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52039
	I0926 17:53:00.725706    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:00.726059    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:00.726076    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:00.726325    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:00.726449    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:00.754773    4178 out.go:177] * Using the hyperkit driver based on existing profile
	I0926 17:53:00.797071    4178 start.go:297] selected driver: hyperkit
	I0926 17:53:00.797101    4178 start.go:901] validating driver "hyperkit" against &{Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclas
s:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:53:00.797347    4178 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:53:00.797543    4178 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:53:00.797758    4178 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19711-1128/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0926 17:53:00.807380    4178 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0926 17:53:00.811121    4178 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.811145    4178 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0926 17:53:00.813743    4178 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:53:00.813780    4178 cni.go:84] Creating CNI manager for ""
	I0926 17:53:00.813817    4178 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0926 17:53:00.813892    4178 start.go:340] cluster config:
	{Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacce
l:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:53:00.814010    4178 iso.go:125] acquiring lock: {Name:mka8a9c5a237c1e4ae233281d2ff7965d13a843d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:53:00.856015    4178 out.go:177] * Starting "ha-476000" primary control-plane node in "ha-476000" cluster
	I0926 17:53:00.877127    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:53:00.877240    4178 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0926 17:53:00.877263    4178 cache.go:56] Caching tarball of preloaded images
	I0926 17:53:00.877457    4178 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 17:53:00.877476    4178 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:53:00.877658    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:00.878610    4178 start.go:360] acquireMachinesLock for ha-476000: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:53:00.878759    4178 start.go:364] duration metric: took 97.008µs to acquireMachinesLock for "ha-476000"
	I0926 17:53:00.878828    4178 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:53:00.878843    4178 fix.go:54] fixHost starting: 
	I0926 17:53:00.879324    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.879362    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:00.888435    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52041
	I0926 17:53:00.888799    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:00.889164    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:00.889177    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:00.889396    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:00.889518    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:00.889616    4178 main.go:141] libmachine: (ha-476000) Calling .GetState
	I0926 17:53:00.889695    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:00.889775    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4068
	I0926 17:53:00.890689    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid 4068 missing from process table
	I0926 17:53:00.890720    4178 fix.go:112] recreateIfNeeded on ha-476000: state=Stopped err=<nil>
	I0926 17:53:00.890735    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	W0926 17:53:00.890819    4178 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:53:00.933253    4178 out.go:177] * Restarting existing hyperkit VM for "ha-476000" ...
	I0926 17:53:00.956221    4178 main.go:141] libmachine: (ha-476000) Calling .Start
	I0926 17:53:00.956482    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:00.956522    4178 main.go:141] libmachine: (ha-476000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid
	I0926 17:53:00.958313    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid 4068 missing from process table
	I0926 17:53:00.958323    4178 main.go:141] libmachine: (ha-476000) DBG | pid 4068 is in state "Stopped"
	I0926 17:53:00.958337    4178 main.go:141] libmachine: (ha-476000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid...
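The stale-pid handling above amounts to probing the recorded pid without actually signaling it. A standalone sketch of that check (the pidfile path here is hypothetical, chosen for illustration; kill -0 only tests whether the pid exists and can be signaled):

    pidfile="$HOME/.minikube/machines/ha-476000/hyperkit.pid"   # hypothetical path for illustration
    if pid=$(cat "$pidfile" 2>/dev/null) && kill -0 "$pid" 2>/dev/null; then
        echo "hyperkit $pid is still running"
    else
        rm -f "$pidfile"   # pid missing from the process table: remove the stale file
    fi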
	I0926 17:53:00.958705    4178 main.go:141] libmachine: (ha-476000) DBG | Using UUID 99cfbb80-3e9d-4d4f-a72a-447af4e3b1db
	I0926 17:53:01.067490    4178 main.go:141] libmachine: (ha-476000) DBG | Generated MAC 96:a2:4a:f3:be:4a
	I0926 17:53:01.067521    4178 main.go:141] libmachine: (ha-476000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000
	I0926 17:53:01.067590    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"99cfbb80-3e9d-4d4f-a72a-447af4e3b1db", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:01.067614    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"99cfbb80-3e9d-4d4f-a72a-447af4e3b1db", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:01.067680    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "99cfbb80-3e9d-4d4f-a72a-447af4e3b1db", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/ha-476000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"}
	I0926 17:53:01.067717    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 99cfbb80-3e9d-4d4f-a72a-447af4e3b1db -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/ha-476000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"
	I0926 17:53:01.067731    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 17:53:01.069340    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Pid is 4191
	I0926 17:53:01.069679    4178 main.go:141] libmachine: (ha-476000) DBG | Attempt 0
	I0926 17:53:01.069693    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:01.069753    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4191
	I0926 17:53:01.071639    4178 main.go:141] libmachine: (ha-476000) DBG | Searching for 96:a2:4a:f3:be:4a in /var/db/dhcpd_leases ...
	I0926 17:53:01.071694    4178 main.go:141] libmachine: (ha-476000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:53:01.071711    4178 main.go:141] libmachine: (ha-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f7523f}
	I0926 17:53:01.071719    4178 main.go:141] libmachine: (ha-476000) DBG | Found match: 96:a2:4a:f3:be:4a
	I0926 17:53:01.071724    4178 main.go:141] libmachine: (ha-476000) DBG | IP: 192.169.0.5
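The lease lookup above is a scan of the hyperkit host's DHCP database keyed on the generated MAC. The same check can be done by hand against the file named in the log (a sketch; the context flags just expose the lease fields around the matching hw_address line):

    grep -B3 -A1 '96:a2:4a:f3:be:4a' /var/db/dhcpd_leases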
	I0926 17:53:01.071801    4178 main.go:141] libmachine: (ha-476000) Calling .GetConfigRaw
	I0926 17:53:01.072466    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:01.072682    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:01.073265    4178 machine.go:93] provisionDockerMachine start ...
	I0926 17:53:01.073276    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:01.073432    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:01.073553    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:01.073654    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:01.073744    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:01.073824    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:01.073962    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:01.074151    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:01.074160    4178 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 17:53:01.077803    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 17:53:01.131821    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 17:53:01.132498    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:01.132519    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:01.132527    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:01.132535    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:01.515934    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 17:53:01.515948    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 17:53:01.630853    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:01.630870    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:01.630880    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:01.630889    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:01.631762    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 17:53:01.631773    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 17:53:07.224844    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0926 17:53:07.224979    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0926 17:53:07.224989    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0926 17:53:07.249067    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0926 17:53:12.148094    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 17:53:12.148109    4178 main.go:141] libmachine: (ha-476000) Calling .GetMachineName
	I0926 17:53:12.148318    4178 buildroot.go:166] provisioning hostname "ha-476000"
	I0926 17:53:12.148328    4178 main.go:141] libmachine: (ha-476000) Calling .GetMachineName
	I0926 17:53:12.148430    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.148546    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.148649    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.148741    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.148844    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.148986    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.149192    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.149200    4178 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-476000 && echo "ha-476000" | sudo tee /etc/hostname
	I0926 17:53:12.225889    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-476000
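`sudo hostname` changes the runtime name, while the `tee /etc/hostname` half persists it across reboots; both can be confirmed independently on the node (a quick sketch):

    hostname            # runtime value, set by `sudo hostname ha-476000`
    cat /etc/hostname   # persisted value, written via `tee`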
	
	I0926 17:53:12.225907    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.226039    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.226125    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.226235    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.226313    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.226463    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.226601    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.226612    4178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-476000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-476000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-476000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:53:12.298491    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
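The /etc/hosts script above is idempotent: `grep -xq` does a quiet whole-line match, so the 127.0.1.1 entry is replaced or appended only when the hostname is not already present. The branch is driven purely by grep's exit status (a sketch of the same test):

    grep -xq '127.0.1.1\s.*' /etc/hosts && echo "existing 127.0.1.1 entry" || echo "no entry yet"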
	I0926 17:53:12.298512    4178 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 17:53:12.298531    4178 buildroot.go:174] setting up certificates
	I0926 17:53:12.298537    4178 provision.go:84] configureAuth start
	I0926 17:53:12.298544    4178 main.go:141] libmachine: (ha-476000) Calling .GetMachineName
	I0926 17:53:12.298672    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:12.298777    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.298858    4178 provision.go:143] copyHostCerts
	I0926 17:53:12.298890    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:12.298959    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 17:53:12.298968    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:12.299110    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 17:53:12.299320    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:12.299359    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 17:53:12.299364    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:12.299452    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 17:53:12.299596    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:12.299633    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 17:53:12.299638    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:12.299717    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 17:53:12.299883    4178 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.ha-476000 san=[127.0.0.1 192.169.0.5 ha-476000 localhost minikube]
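The SAN list logged above is what lets a single server.pem satisfy clients that dial the API by loopback address, VM IP, or machine name. One portable way to inspect the result (standard openssl against the path from the log):

    openssl x509 -in /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem \
        -noout -text | grep -A1 'Subject Alternative Name'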
	I0926 17:53:12.619231    4178 provision.go:177] copyRemoteCerts
	I0926 17:53:12.619306    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 17:53:12.619328    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.619499    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.619617    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.619721    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.619805    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:12.659598    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 17:53:12.659672    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 17:53:12.679552    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 17:53:12.679620    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0926 17:53:12.699069    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 17:53:12.699141    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 17:53:12.718755    4178 provision.go:87] duration metric: took 420.20261ms to configureAuth
	I0926 17:53:12.718767    4178 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:53:12.718921    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:12.718934    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:12.719072    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.719167    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.719255    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.719341    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.719422    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.719544    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.719669    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.719676    4178 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:53:12.785771    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:53:12.785788    4178 buildroot.go:70] root file system type: tmpfs
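The fstype probe works because `df --output=fstype /` prints a header row plus the type of the root filesystem, and `tail -n 1` keeps just the value; on this VM that value is tmpfs, as the result above shows. For example:

    df --output=fstype /              # prints "Type" then "tmpfs"
    df --output=fstype / | tail -n 1  # prints just "tmpfs"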
	I0926 17:53:12.785872    4178 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:53:12.785886    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.786022    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.786110    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.786193    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.786273    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.786415    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.786558    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.786601    4178 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:53:12.862455    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 17:53:12.862477    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.862607    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.862705    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.862800    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.862882    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.863016    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.863156    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.863169    4178 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:53:14.510518    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 17:53:14.510534    4178 machine.go:96] duration metric: took 13.437211612s to provisionDockerMachine
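
The pair of SSH commands above is the idempotent unit-file update at work: render the unit to docker.service.new, diff it against the live file, and only on a difference swap it in and daemon-reload/enable/restart (here diff exited nonzero because docker.service did not exist yet, so the new unit was installed). A minimal Go sketch of the same write-diff-swap pattern; updateUnit and its simplified shell quoting are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
)

// updateUnit writes newBody to path+".new" and only swaps it in
// (followed by daemon-reload and restart) when it differs from the
// current file -- mirroring the diff/mv/systemctl sequence in the log.
// Go's %q quoting is close enough to shell double-quoting for this demo,
// but a real implementation would quote for the shell properly.
func updateUnit(path, newBody string) error {
	tmp := path + ".new"
	write := fmt.Sprintf("printf %%s %q | sudo tee %s >/dev/null", newBody, tmp)
	if err := exec.Command("bash", "-c", write).Run(); err != nil {
		return err
	}
	// `diff -u` exits 0 when the files match; the || branch runs only
	// on a change (or when the live unit is missing, as in the log).
	swap := fmt.Sprintf(
		"sudo diff -u %s %s || { sudo mv %s %s && sudo systemctl daemon-reload && sudo systemctl restart docker; }",
		path, tmp, tmp, path)
	return exec.Command("bash", "-c", swap).Run()
}

func main() {
	// Hypothetical usage; on a real host this needs root and systemd.
	_ = updateUnit("/lib/systemd/system/docker.service", "[Unit]\nDescription=demo\n")
}
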
	I0926 17:53:14.510545    4178 start.go:293] postStartSetup for "ha-476000" (driver="hyperkit")
	I0926 17:53:14.510553    4178 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:53:14.510563    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.510765    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:53:14.510780    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.510875    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.510981    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.511085    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.511186    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.553095    4178 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:53:14.556852    4178 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 17:53:14.556867    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 17:53:14.556973    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 17:53:14.557159    4178 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 17:53:14.557167    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 17:53:14.557383    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 17:53:14.567060    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:53:14.600616    4178 start.go:296] duration metric: took 90.060103ms for postStartSetup
	I0926 17:53:14.600637    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.600819    4178 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0926 17:53:14.600832    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.600912    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.600992    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.601061    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.601150    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.640650    4178 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0926 17:53:14.640716    4178 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0926 17:53:14.694957    4178 fix.go:56] duration metric: took 13.816065248s for fixHost
	I0926 17:53:14.694980    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.695115    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.695206    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.695301    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.695399    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.695527    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:14.695674    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:14.695682    4178 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:53:14.760098    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398394.872717718
	
	I0926 17:53:14.760109    4178 fix.go:216] guest clock: 1727398394.872717718
	I0926 17:53:14.760115    4178 fix.go:229] Guest: 2024-09-26 17:53:14.872717718 -0700 PDT Remote: 2024-09-26 17:53:14.69497 -0700 PDT m=+14.262859348 (delta=177.747718ms)
	I0926 17:53:14.760134    4178 fix.go:200] guest clock delta is within tolerance: 177.747718ms
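
The guest-clock check above runs `date +%s.%N` inside the VM and accepts the machine only when the guest/host skew is within tolerance (177ms here). A local sketch of the same comparison; the 2-second threshold is an assumption for illustration, not minikube's actual constant, and minikube runs the date command over SSH rather than locally:

package main

import (
	"fmt"
	"math"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta runs `date +%s.%N` and returns the absolute offset
// from the local clock. Sub-microsecond rounding from the float64
// conversion is irrelevant at millisecond tolerances.
func guestClockDelta() (time.Duration, error) {
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return 0, err
	}
	delta := time.Since(time.Unix(0, int64(secs*1e9)))
	return time.Duration(math.Abs(float64(delta))), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold
	d, err := guestClockDelta()
	if err != nil {
		panic(err)
	}
	fmt.Printf("clock delta %v within tolerance: %v\n", d, d < tolerance)
}
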
	I0926 17:53:14.760137    4178 start.go:83] releasing machines lock for "ha-476000", held for 13.881299475s
	I0926 17:53:14.760155    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760297    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:14.760395    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760729    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760850    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760950    4178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:53:14.760987    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.761013    4178 ssh_runner.go:195] Run: cat /version.json
	I0926 17:53:14.761025    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.761099    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.761116    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.761194    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.761205    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.761304    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.761313    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.761398    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.761432    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.795855    4178 ssh_runner.go:195] Run: systemctl --version
	I0926 17:53:14.843523    4178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 17:53:14.848548    4178 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:53:14.848602    4178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 17:53:14.862277    4178 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 17:53:14.862289    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:14.862388    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:14.879332    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 17:53:14.888407    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:53:14.897249    4178 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:53:14.897300    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:53:14.906191    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:14.914943    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:53:14.923611    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:14.932390    4178 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:53:14.941382    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:53:14.950233    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:53:14.959047    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 17:53:14.967887    4178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:53:14.975975    4178 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 17:53:14.976018    4178 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 17:53:14.985185    4178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
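
The three steps above (probe the bridge sysctl, modprobe br_netfilter when /proc/sys/net/bridge is missing, force ip_forward to 1) are the standard pre-kubeadm netfilter setup. A condensed sketch of the same sequence; it must run as root on Linux, and error handling is minimal:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// 1. Probe: fails (status 255 in the log) until br_netfilter is loaded.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// 2. Load the module so the bridge sysctls appear under /proc/sys.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			panic(err)
		}
	}
	// 3. Enable IPv4 forwarding (the log does this via `sudo sh -c "echo 1 > ..."`).
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		panic(err)
	}
}
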
	I0926 17:53:14.993181    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:15.086628    4178 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 17:53:15.106310    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:15.106396    4178 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:53:15.118546    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:15.129665    4178 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:53:15.143061    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:15.154154    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:15.164978    4178 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 17:53:15.188125    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:15.199509    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:15.214608    4178 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:53:15.217523    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:53:15.225391    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 17:53:15.238858    4178 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:53:15.337444    4178 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:53:15.437802    4178 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:53:15.437879    4178 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 17:53:15.451733    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:15.563208    4178 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:53:17.891140    4178 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.327906141s)
	I0926 17:53:17.891209    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 17:53:17.902729    4178 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0926 17:53:17.915694    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:17.926164    4178 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 17:53:18.028587    4178 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 17:53:18.135687    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:18.246049    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 17:53:18.259788    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:18.270995    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:18.379007    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 17:53:18.442458    4178 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 17:53:18.442555    4178 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0926 17:53:18.447167    4178 start.go:563] Will wait 60s for crictl version
	I0926 17:53:18.447233    4178 ssh_runner.go:195] Run: which crictl
	I0926 17:53:18.450364    4178 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 17:53:18.474973    4178 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0926 17:53:18.475082    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:53:18.492744    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:53:18.534852    4178 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0926 17:53:18.534897    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:18.535304    4178 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0926 17:53:18.539884    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
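
The bash one-liner above is the hosts-entry refresh pattern: strip any existing host.minikube.internal line, append the new mapping, write to a temp file, then copy it over /etc/hosts in one step so the file is never left half-written. The same idea in Go; upsertHost is a hypothetical helper, not minikube code:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites hostsPath so exactly one line maps name to ip,
// matching the grep -v $'\t<name>$' filter in the logged command.
func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop any stale entry
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath) // atomic on the same filesystem
}

func main() {
	_ = upsertHost("/etc/hosts", "192.169.0.1", "host.minikube.internal")
}
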
	I0926 17:53:18.549924    4178 kubeadm.go:883] updating cluster {Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 17:53:18.550017    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:53:18.550087    4178 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 17:53:18.562413    4178 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0926 17:53:18.562429    4178 docker.go:615] Images already preloaded, skipping extraction
	I0926 17:53:18.562517    4178 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 17:53:18.574107    4178 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0926 17:53:18.574127    4178 cache_images.go:84] Images are preloaded, skipping loading
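
"Images are preloaded, skipping loading" is decided by listing the guest's images with `docker images --format {{.Repository}}:{{.Tag}}` and checking that the expected preload set is present. A rough sketch of that check, with the expected list abridged from the log output above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Expected images for the k8s version (abridged from the log).
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	// Image refs contain no spaces, so splitting on whitespace is safe.
	have := make(map[string]bool)
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, want := range expected {
		if !have[want] {
			fmt.Println("missing, would extract preload tarball:", want)
			return
		}
	}
	fmt.Println("images already preloaded, skipping extraction")
}
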
	I0926 17:53:18.574137    4178 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0926 17:53:18.574213    4178 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-476000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 17:53:18.574296    4178 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0926 17:53:18.611557    4178 cni.go:84] Creating CNI manager for ""
	I0926 17:53:18.611571    4178 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0926 17:53:18.611586    4178 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 17:53:18.611607    4178 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-476000 NodeName:ha-476000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 17:53:18.611700    4178 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-476000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
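
The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options struct logged just before them. A toy text/template rendering of the InitConfiguration portion only; the Params struct and its field names are invented for illustration and carry far less than the real options struct:

package main

import (
	"os"
	"text/template"
)

// Params holds just the fields the toy template below needs.
type Params struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	p := Params{"192.169.0.5", 8443, "ha-476000", "unix:///var/run/cri-dockerd.sock"}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
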
	
	I0926 17:53:18.611713    4178 kube-vip.go:115] generating kube-vip config ...
	I0926 17:53:18.611769    4178 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0926 17:53:18.624452    4178 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0926 17:53:18.624524    4178 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
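
kube-vip runs as a static pod: the manifest above is simply written into the kubelet's staticPodPath (/etc/kubernetes/manifests, per the KubeletConfiguration earlier), and the kubelet starts it with no API server involved. A minimal sketch of dropping such a manifest in place; the manifest body is abridged and the path is taken from the config above:

package main

import (
	"os"
	"path/filepath"
)

func main() {
	manifest := []byte("apiVersion: v1\nkind: Pod\nmetadata:\n  name: kube-vip\n  namespace: kube-system\n")
	dir := "/etc/kubernetes/manifests" // kubelet's staticPodPath
	if err := os.MkdirAll(dir, 0755); err != nil {
		panic(err)
	}
	// The kubelet watches this directory and (re)starts the pod on change.
	if err := os.WriteFile(filepath.Join(dir, "kube-vip.yaml"), manifest, 0644); err != nil {
		panic(err)
	}
}
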
	I0926 17:53:18.624583    4178 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0926 17:53:18.632661    4178 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 17:53:18.632722    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0926 17:53:18.640016    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0926 17:53:18.653424    4178 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 17:53:18.666861    4178 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0926 17:53:18.680665    4178 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0926 17:53:18.694237    4178 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0926 17:53:18.697273    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 17:53:18.706489    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:18.799127    4178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:53:18.813428    4178 certs.go:68] Setting up /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000 for IP: 192.169.0.5
	I0926 17:53:18.813441    4178 certs.go:194] generating shared ca certs ...
	I0926 17:53:18.813450    4178 certs.go:226] acquiring lock for ca certs: {Name:mk3d4665cb5756ae49ea6f7cd2a58d273aecc507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:18.813627    4178 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key
	I0926 17:53:18.813697    4178 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key
	I0926 17:53:18.813709    4178 certs.go:256] generating profile certs ...
	I0926 17:53:18.813816    4178 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key
	I0926 17:53:18.813837    4178 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9
	I0926 17:53:18.813853    4178 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0926 17:53:19.198737    4178 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9 ...
	I0926 17:53:19.198759    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9: {Name:mkf72026f41cf052c5981dfd73bcc3ea46813a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.199347    4178 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9 ...
	I0926 17:53:19.199358    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9: {Name:mkb6fc9895bd700bb149434e702cedd545112b66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.199565    4178 certs.go:381] copying /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9 -> /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt
	I0926 17:53:19.199778    4178 certs.go:385] copying /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9 -> /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key
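
The apiserver certificate generated above carries IP SANs for every control-plane node plus the service IP (10.96.0.1) and the kube-vip VIP (192.169.0.254). A compact crypto/x509 sketch that issues a self-signed certificate with those SANs; minikube signs with its minikubeCA instead of self-signing, so this shows only the SAN mechanics:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs taken from the crypto.go log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.169.0.5"), net.ParseIP("192.169.0.6"),
			net.ParseIP("192.169.0.7"), net.ParseIP("192.169.0.254"),
		},
	}
	// Self-signed: the template is its own parent.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
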
	I0926 17:53:19.200020    4178 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key
	I0926 17:53:19.200030    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0926 17:53:19.200052    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0926 17:53:19.200071    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0926 17:53:19.200089    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0926 17:53:19.200107    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0926 17:53:19.200125    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0926 17:53:19.200142    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0926 17:53:19.200160    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0926 17:53:19.200250    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem (1338 bytes)
	W0926 17:53:19.200297    4178 certs.go:480] ignoring /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679_empty.pem, impossibly tiny 0 bytes
	I0926 17:53:19.200306    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 17:53:19.200335    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem (1082 bytes)
	I0926 17:53:19.200365    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem (1123 bytes)
	I0926 17:53:19.200393    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem (1675 bytes)
	I0926 17:53:19.200455    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:53:19.200488    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem -> /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.200508    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.200526    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.200943    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 17:53:19.229781    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 17:53:19.249730    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 17:53:19.269922    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0926 17:53:19.290358    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0926 17:53:19.309964    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 17:53:19.329782    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 17:53:19.349170    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 17:53:19.368557    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem --> /usr/share/ca-certificates/1679.pem (1338 bytes)
	I0926 17:53:19.388315    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1708 bytes)
	I0926 17:53:19.407646    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 17:53:19.427156    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 17:53:19.441065    4178 ssh_runner.go:195] Run: openssl version
	I0926 17:53:19.445301    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1679.pem && ln -fs /usr/share/ca-certificates/1679.pem /etc/ssl/certs/1679.pem"
	I0926 17:53:19.453728    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.457317    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:36 /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.457357    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.461742    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1679.pem /etc/ssl/certs/51391683.0"
	I0926 17:53:19.470198    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0926 17:53:19.478616    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.482140    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:36 /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.482201    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.486473    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 17:53:19.494777    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 17:53:19.503295    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.506902    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.506943    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.511360    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
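
The symlink names like /etc/ssl/certs/b5213941.0 follow OpenSSL's subject-hash convention: `openssl x509 -hash -noout` prints the hash, and a <hash>.0 symlink makes the certificate discoverable by OpenSSL's CA directory lookup. A sketch that automates the pairing; installCA is a hypothetical helper, and it needs write access to /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA links certPath into /etc/ssl/certs under its OpenSSL
// subject hash so TLS clients using the system store can find it.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	os.Remove(link) // ignore error; mirrors the force flag of `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	_ = installCA("/usr/share/ca-certificates/minikubeCA.pem")
}
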
	I0926 17:53:19.519826    4178 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 17:53:19.523465    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0926 17:53:19.528006    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0926 17:53:19.532444    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0926 17:53:19.537126    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0926 17:53:19.541512    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0926 17:53:19.545827    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
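
Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours; a nonzero exit would mark it for regeneration. The same test in pure Go using crypto/x509; expiresWithin is illustrative, not minikube's code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// inside d -- the Go analogue of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
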
	I0926 17:53:19.550166    4178 kubeadm.go:392] StartCluster: {Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:53:19.550298    4178 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 17:53:19.561803    4178 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 17:53:19.569639    4178 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0926 17:53:19.569650    4178 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0926 17:53:19.569698    4178 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0926 17:53:19.577403    4178 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0926 17:53:19.577718    4178 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-476000" does not appear in /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:19.577801    4178 kubeconfig.go:62] /Users/jenkins/minikube-integration/19711-1128/kubeconfig needs updating (will repair): [kubeconfig missing "ha-476000" cluster setting kubeconfig missing "ha-476000" context setting]
	I0926 17:53:19.577967    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/kubeconfig: {Name:mkb9c069c578c490d8cb734b05a21821e9215482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.578378    4178 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:19.578577    4178 kapi.go:59] client config for ha-476000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key", CAFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7459f00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 17:53:19.578890    4178 cert_rotation.go:140] Starting client certificate rotation controller
	I0926 17:53:19.579075    4178 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0926 17:53:19.586457    4178 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0926 17:53:19.586468    4178 kubeadm.go:597] duration metric: took 16.814329ms to restartPrimaryControlPlane
	I0926 17:53:19.586474    4178 kubeadm.go:394] duration metric: took 36.313109ms to StartCluster
	I0926 17:53:19.586484    4178 settings.go:142] acquiring lock: {Name:mka8948d0f70add5c5f20f2eca7124a97a496c1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.586556    4178 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:19.586877    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/kubeconfig: {Name:mkb9c069c578c490d8cb734b05a21821e9215482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.587096    4178 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:53:19.587108    4178 start.go:241] waiting for startup goroutines ...
	I0926 17:53:19.587128    4178 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 17:53:19.587252    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:19.629430    4178 out.go:177] * Enabled addons: 
	I0926 17:53:19.650423    4178 addons.go:510] duration metric: took 63.269239ms for enable addons: enabled=[]
	I0926 17:53:19.650464    4178 start.go:246] waiting for cluster config update ...
	I0926 17:53:19.650475    4178 start.go:255] writing updated cluster config ...
	I0926 17:53:19.672508    4178 out.go:201] 
	I0926 17:53:19.693989    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:19.694118    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:19.716427    4178 out.go:177] * Starting "ha-476000-m02" control-plane node in "ha-476000" cluster
	I0926 17:53:19.758555    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:53:19.758588    4178 cache.go:56] Caching tarball of preloaded images
	I0926 17:53:19.758767    4178 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 17:53:19.758785    4178 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:53:19.758898    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:19.759817    4178 start.go:360] acquireMachinesLock for ha-476000-m02: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:53:19.759922    4178 start.go:364] duration metric: took 80.364µs to acquireMachinesLock for "ha-476000-m02"
	I0926 17:53:19.759947    4178 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:53:19.759956    4178 fix.go:54] fixHost starting: m02
	I0926 17:53:19.760406    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:19.760442    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:19.769605    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52063
	I0926 17:53:19.770014    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:19.770353    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:19.770365    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:19.770608    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:19.770743    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:19.770835    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetState
	I0926 17:53:19.770922    4178 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:19.771000    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid from json: 4002
	I0926 17:53:19.771916    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid 4002 missing from process table
	I0926 17:53:19.771940    4178 fix.go:112] recreateIfNeeded on ha-476000-m02: state=Stopped err=<nil>
	I0926 17:53:19.771957    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	W0926 17:53:19.772037    4178 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:53:19.814436    4178 out.go:177] * Restarting existing hyperkit VM for "ha-476000-m02" ...
	I0926 17:53:19.835535    4178 main.go:141] libmachine: (ha-476000-m02) Calling .Start
	I0926 17:53:19.835810    4178 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:19.835874    4178 main.go:141] libmachine: (ha-476000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid
	I0926 17:53:19.837665    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid 4002 missing from process table
	I0926 17:53:19.837678    4178 main.go:141] libmachine: (ha-476000-m02) DBG | pid 4002 is in state "Stopped"
	I0926 17:53:19.837694    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid...
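
The DBG lines above show the stale-pid handling for an uncleanly stopped VM: read hyperkit.pid, check whether that pid is still alive, and remove the file before launching a fresh hyperkit. A Unix-only sketch of the liveness probe using the classic kill(pid, 0) trick; the pid-file path is a placeholder for the machine directory in the log:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// pidAlive reports whether a process with the given pid exists.
// Note kill(pid, 0) can also return EPERM for a live process owned
// by another user; treating any error as "not ours / not running"
// is good enough for a single-user machine directory.
func pidAlive(pid int) bool {
	return syscall.Kill(pid, 0) == nil
}

func main() {
	pidFile := "/tmp/hyperkit.pid" // placeholder path
	data, err := os.ReadFile(pidFile)
	if err != nil {
		return // no pid file: clean shutdown, nothing to do
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err == nil && pidAlive(pid) {
		fmt.Println("hyperkit still running as pid", pid)
		return
	}
	fmt.Println("removing stale pid file for pid", pid)
	os.Remove(pidFile)
}
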
	I0926 17:53:19.838041    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Using UUID 58f499c4-942a-445b-bae0-ab27a7b8106e
	I0926 17:53:19.865707    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Generated MAC 9e:5:36:80:93:e3
	I0926 17:53:19.865728    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000
	I0926 17:53:19.865872    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"58f499c4-942a-445b-bae0-ab27a7b8106e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:19.865901    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"58f499c4-942a-445b-bae0-ab27a7b8106e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:19.865946    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "58f499c4-942a-445b-bae0-ab27a7b8106e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/ha-476000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"}
	I0926 17:53:19.866020    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 58f499c4-942a-445b-bae0-ab27a7b8106e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/ha-476000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"
	I0926 17:53:19.866041    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 17:53:19.867306    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Pid is 4198
	I0926 17:53:19.867704    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Attempt 0
	I0926 17:53:19.867718    4178 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:19.867787    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid from json: 4198
	I0926 17:53:19.869727    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Searching for 9e:5:36:80:93:e3 in /var/db/dhcpd_leases ...
	I0926 17:53:19.869759    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:53:19.869772    4178 main.go:141] libmachine: (ha-476000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 17:53:19.869793    4178 main.go:141] libmachine: (ha-476000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 17:53:19.869821    4178 main.go:141] libmachine: (ha-476000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f751f8}
	I0926 17:53:19.869834    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Found match: 9e:5:36:80:93:e3
	I0926 17:53:19.869848    4178 main.go:141] libmachine: (ha-476000-m02) DBG | IP: 192.169.0.6
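
Editor's note: the lease lookup above scans macOS's /var/db/dhcpd_leases for an entry whose HWAddress ends with the VM's generated MAC and takes its IP. A minimal Go sketch of that matching logic, assuming the `ip_address=`/`hw_address=` key layout seen in macOS lease files (this is not minikube's actual parser):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findLeaseIP scans a dhcpd_leases-style file for a block whose
// hw_address ends with the given MAC and returns its ip_address.
// The "1," prefix on hw_address is the hardware type, as in the log.
func findLeaseIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=") // remember most recent IP
		case strings.HasPrefix(line, "hw_address="):
			if strings.HasSuffix(line, ","+mac) {
				return ip, nil // the IP seen just before this hw_address
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := findLeaseIP("/var/db/dhcpd_leases", "9e:5:36:80:93:e3")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip) // expected 192.169.0.6 per the log above
}
```
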
	I0926 17:53:19.869914    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetConfigRaw
	I0926 17:53:19.870579    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:19.870762    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:19.871158    4178 machine.go:93] provisionDockerMachine start ...
	I0926 17:53:19.871172    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:19.871294    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:19.871392    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:19.871530    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:19.871631    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:19.871718    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:19.871893    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:19.872031    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:19.872038    4178 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 17:53:19.875766    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 17:53:19.884496    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 17:53:19.885379    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:19.885391    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:19.885398    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:19.885403    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:20.270703    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 17:53:20.270718    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 17:53:20.385412    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:20.385431    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:20.385441    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:20.385468    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:20.386358    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 17:53:20.386369    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 17:53:25.988386    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0926 17:53:25.988424    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0926 17:53:25.988435    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0926 17:53:26.012163    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:26 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0926 17:53:30.140708    4178 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.6:22: connect: connection refused
	I0926 17:53:33.199866    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
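
Editor's note: the "connection refused" at 17:53:30 followed by a successful `hostname` run at 17:53:33 is the usual dial-until-sshd-is-up loop after a VM restart. A hedged sketch of that retry pattern; the timeout and interval values are illustrative, not minikube's actual settings:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials addr until the TCP port accepts a connection or the
// deadline passes. It only proves the port is open, not that sshd is
// fully ready, which is why callers still retry the first command.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh not reachable at %s: %w", addr, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if err := waitForSSH("192.169.0.6:22", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh port open")
}
```
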
	
	I0926 17:53:33.199881    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetMachineName
	I0926 17:53:33.200004    4178 buildroot.go:166] provisioning hostname "ha-476000-m02"
	I0926 17:53:33.200013    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetMachineName
	I0926 17:53:33.200123    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.200213    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.200322    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.200426    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.200540    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.200702    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.200858    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.200867    4178 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-476000-m02 && echo "ha-476000-m02" | sudo tee /etc/hostname
	I0926 17:53:33.269037    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-476000-m02
	
	I0926 17:53:33.269056    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.269193    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.269285    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.269368    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.269450    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.269573    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.269735    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.269746    4178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-476000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-476000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-476000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:53:33.331289    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
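
Editor's note: the shell snippet above makes the hostname entry idempotent: do nothing if a line already names the host, otherwise rewrite an existing "127.0.1.1 ..." line, otherwise append one. The same logic in plain Go, purely as a local illustration (the function name and simplified matching are assumptions):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostname mirrors the shell snippet: if no line already ends with
// the hostname, rewrite an existing "127.0.1.1 ..." line or append one.
func ensureHostname(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), " "+hostname) {
			return nil // already present, nothing to do
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "ha-476000-m02"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
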
	I0926 17:53:33.331305    4178 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 17:53:33.331314    4178 buildroot.go:174] setting up certificates
	I0926 17:53:33.331321    4178 provision.go:84] configureAuth start
	I0926 17:53:33.331328    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetMachineName
	I0926 17:53:33.331463    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:33.331556    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.331643    4178 provision.go:143] copyHostCerts
	I0926 17:53:33.331674    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:33.331734    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 17:53:33.331740    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:33.331856    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 17:53:33.332044    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:33.332093    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 17:53:33.332098    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:33.332176    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 17:53:33.332314    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:33.332352    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 17:53:33.332356    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:33.332427    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 17:53:33.332570    4178 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.ha-476000-m02 san=[127.0.0.1 192.169.0.6 ha-476000-m02 localhost minikube]
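
Editor's note: the line above issues a server certificate signed by the minikube CA with a SAN list covering the loopback address, the VM IP, and the host names. A self-contained crypto/x509 sketch of that shape of certificate; the self-signed CA here stands in for ca.pem/ca-key.pem and the validity periods are assumptions:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA standing in for .minikube/certs/ca.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN list from the log line above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-476000-m02"}},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
		DNSNames:     []string{"ha-476000-m02", "localhost", "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```
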
	I0926 17:53:33.395607    4178 provision.go:177] copyRemoteCerts
	I0926 17:53:33.395696    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 17:53:33.395715    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.395906    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.396015    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.396100    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.396196    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:33.431740    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 17:53:33.431806    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0926 17:53:33.452053    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 17:53:33.452106    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 17:53:33.471760    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 17:53:33.471825    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 17:53:33.490896    4178 provision.go:87] duration metric: took 159.567474ms to configureAuth
	I0926 17:53:33.490910    4178 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:53:33.491086    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:33.491099    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:33.491231    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.491321    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.491413    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.491498    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.491591    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.491713    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.491847    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.491854    4178 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:53:33.547403    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:53:33.547417    4178 buildroot.go:70] root file system type: tmpfs
	I0926 17:53:33.547504    4178 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:53:33.547518    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.547665    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.547775    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.547896    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.547997    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.548125    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.548268    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.548312    4178 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:53:33.613348    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 17:53:33.613367    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.613495    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.613582    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.613661    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.613747    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.613879    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.614018    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.614033    4178 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:53:35.261247    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 17:53:35.261262    4178 machine.go:96] duration metric: took 15.390039559s to provisionDockerMachine
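
Editor's note: the single SSH command before this step is an idempotent unit install: diff the rendered docker.service.new against the live unit, and only on a difference move it into place, daemon-reload, enable, and restart. A local Go sketch of the same compare-then-swap; paths and the sudo invocations are illustrative, not minikube's internal runner:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged replaces dst with src only when the contents differ,
// then reloads systemd and restarts the unit, mirroring the SSH one-liner.
func installIfChanged(src, dst, unit string) error {
	want, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	have, err := os.ReadFile(dst) // a missing dst counts as "different"
	if err == nil && bytes.Equal(want, have) {
		return os.Remove(src) // unit unchanged, nothing to do
	}
	if err := os.Rename(src, dst); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", unit},
		{"systemctl", "restart", unit},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := installIfChanged(
		"/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service",
		"docker")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
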
	I0926 17:53:35.261270    4178 start.go:293] postStartSetup for "ha-476000-m02" (driver="hyperkit")
	I0926 17:53:35.261294    4178 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:53:35.261308    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.261509    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:53:35.261522    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.261612    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.261704    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.261809    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.261922    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:35.302268    4178 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:53:35.305656    4178 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 17:53:35.305666    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 17:53:35.305765    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 17:53:35.305947    4178 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 17:53:35.305953    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 17:53:35.306171    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 17:53:35.314020    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:53:35.344643    4178 start.go:296] duration metric: took 83.349532ms for postStartSetup
	I0926 17:53:35.344681    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.344863    4178 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0926 17:53:35.344877    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.344965    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.345056    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.345137    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.345223    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:35.381164    4178 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0926 17:53:35.381229    4178 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0926 17:53:35.414571    4178 fix.go:56] duration metric: took 15.654555871s for fixHost
	I0926 17:53:35.414597    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.414747    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.414839    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.414932    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.415022    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.415156    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:35.415295    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:35.415302    4178 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:53:35.472100    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398415.586409353
	
	I0926 17:53:35.472129    4178 fix.go:216] guest clock: 1727398415.586409353
	I0926 17:53:35.472134    4178 fix.go:229] Guest: 2024-09-26 17:53:35.586409353 -0700 PDT Remote: 2024-09-26 17:53:35.414586 -0700 PDT m=+34.982399519 (delta=171.823353ms)
	I0926 17:53:35.472150    4178 fix.go:200] guest clock delta is within tolerance: 171.823353ms
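
Editor's note: the clock check runs `date +%s.%N` in the guest and compares the result against the host clock, accepting drift within tolerance. A sketch of the delta computation using the exact values from the log; the one-second tolerance is an assumption, and the parser assumes nine fractional digits as `%N` produces:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestDelta parses "seconds.nanoseconds" output from `date +%s.%N`
// and returns how far the guest clock is ahead of the host clock.
func guestDelta(out string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Host timestamp from the log: 2024-09-26 17:53:35.414586 -0700 PDT.
	host := time.Date(2024, 9, 26, 17, 53, 35, 414586000, time.FixedZone("PDT", -7*3600))
	d, _ := guestDelta("1727398415.586409353", host)
	// Prints ~171.823353ms, matching the delta line; assume a 1s tolerance.
	fmt.Println(d, d < time.Second)
}
```
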
	I0926 17:53:35.472153    4178 start.go:83] releasing machines lock for "ha-476000-m02", held for 15.712162695s
	I0926 17:53:35.472170    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.472305    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:35.513568    4178 out.go:177] * Found network options:
	I0926 17:53:35.535552    4178 out.go:177]   - NO_PROXY=192.169.0.5
	W0926 17:53:35.557416    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:53:35.557455    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.558341    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.558579    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.558709    4178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:53:35.558764    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	W0926 17:53:35.558835    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:53:35.558964    4178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 17:53:35.558985    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.559000    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.559215    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.559232    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.559433    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.559464    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.559662    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.559681    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:35.559790    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	W0926 17:53:35.596059    4178 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:53:35.596139    4178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 17:53:35.610162    4178 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 17:53:35.610178    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:35.610237    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:35.646709    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 17:53:35.656640    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:53:35.665578    4178 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:53:35.665623    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:53:35.674574    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:35.683489    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:53:35.692471    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:35.701275    4178 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:53:35.710401    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:53:35.719421    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:53:35.728448    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 17:53:35.738067    4178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:53:35.746743    4178 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 17:53:35.746802    4178 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 17:53:35.755939    4178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
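
Editor's note: the sysctl probe fails with status 255 because br_netfilter is not loaded yet, which the log itself calls "might be okay"; the recovery is to modprobe the module and force IPv4 forwarding on. A sketch of that probe-then-fallback sequence; the command layout mirrors the log lines, not minikube's internal ssh_runner:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Probe first; a missing /proc/sys/net/bridge/bridge-nf-call-iptables
	// means the br_netfilter module is simply not loaded yet.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("netfilter probe failed, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
	// Always ensure IPv4 forwarding is on for pod traffic.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
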
	I0926 17:53:35.763977    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:35.862563    4178 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 17:53:35.881531    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:35.881616    4178 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:53:35.899471    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:35.910823    4178 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:53:35.923558    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:35.935946    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:35.946007    4178 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 17:53:35.969898    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:35.980115    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:35.995271    4178 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:53:35.998508    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:53:36.005810    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 17:53:36.019492    4178 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:53:36.116976    4178 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:53:36.228090    4178 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:53:36.228117    4178 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 17:53:36.242164    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:36.335597    4178 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:53:38.678847    4178 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.343223137s)
	I0926 17:53:38.678917    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 17:53:38.689531    4178 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0926 17:53:38.702816    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:38.713151    4178 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 17:53:38.819068    4178 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 17:53:38.926667    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:39.040074    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 17:53:39.054197    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:39.065256    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:39.163219    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 17:53:39.228416    4178 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 17:53:39.228518    4178 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0926 17:53:39.233191    4178 start.go:563] Will wait 60s for crictl version
	I0926 17:53:39.233249    4178 ssh_runner.go:195] Run: which crictl
	I0926 17:53:39.236580    4178 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 17:53:39.262407    4178 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0926 17:53:39.262495    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:53:39.279010    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:53:39.317905    4178 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0926 17:53:39.359545    4178 out.go:177]   - env NO_PROXY=192.169.0.5
	I0926 17:53:39.381103    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:39.381320    4178 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0926 17:53:39.384579    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 17:53:39.394395    4178 mustload.go:65] Loading cluster: ha-476000
	I0926 17:53:39.394560    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:39.394810    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:39.394834    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:39.403482    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52086
	I0926 17:53:39.403823    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:39.404150    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:39.404164    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:39.404434    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:39.404542    4178 main.go:141] libmachine: (ha-476000) Calling .GetState
	I0926 17:53:39.404632    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:39.404706    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4191
	I0926 17:53:39.405678    4178 host.go:66] Checking if "ha-476000" exists ...
	I0926 17:53:39.405956    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:39.405986    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:39.414686    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52088
	I0926 17:53:39.415056    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:39.415379    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:39.415388    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:39.415605    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:39.415728    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:39.415830    4178 certs.go:68] Setting up /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000 for IP: 192.169.0.6
	I0926 17:53:39.415836    4178 certs.go:194] generating shared ca certs ...
	I0926 17:53:39.415849    4178 certs.go:226] acquiring lock for ca certs: {Name:mk3d4665cb5756ae49ea6f7cd2a58d273aecc507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:39.416032    4178 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key
	I0926 17:53:39.416108    4178 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key
	I0926 17:53:39.416119    4178 certs.go:256] generating profile certs ...
	I0926 17:53:39.416243    4178 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key
	I0926 17:53:39.416331    4178 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.462632c0
	I0926 17:53:39.416399    4178 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key
	I0926 17:53:39.416406    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0926 17:53:39.416427    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0926 17:53:39.416446    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0926 17:53:39.416465    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0926 17:53:39.416482    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0926 17:53:39.416510    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0926 17:53:39.416544    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0926 17:53:39.416564    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0926 17:53:39.416666    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem (1338 bytes)
	W0926 17:53:39.416716    4178 certs.go:480] ignoring /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679_empty.pem, impossibly tiny 0 bytes
	I0926 17:53:39.416725    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 17:53:39.416762    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem (1082 bytes)
	I0926 17:53:39.416795    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem (1123 bytes)
	I0926 17:53:39.416828    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem (1675 bytes)
	I0926 17:53:39.416893    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:53:39.416929    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.416949    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem -> /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.416967    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.416991    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:39.417078    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:39.417153    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:39.417237    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:39.417320    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:39.447975    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0926 17:53:39.451073    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0926 17:53:39.458912    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0926 17:53:39.462003    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0926 17:53:39.470783    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0926 17:53:39.473836    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0926 17:53:39.481537    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0926 17:53:39.484645    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0926 17:53:39.492945    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0926 17:53:39.495978    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0926 17:53:39.503610    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0926 17:53:39.506808    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0926 17:53:39.514787    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 17:53:39.534891    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 17:53:39.554745    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 17:53:39.574668    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0926 17:53:39.594523    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0926 17:53:39.614131    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 17:53:39.633606    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 17:53:39.653376    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 17:53:39.673369    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 17:53:39.692952    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem --> /usr/share/ca-certificates/1679.pem (1338 bytes)
	I0926 17:53:39.712634    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1708 bytes)
	I0926 17:53:39.732005    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0926 17:53:39.745464    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0926 17:53:39.759232    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0926 17:53:39.772911    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0926 17:53:39.786441    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0926 17:53:39.800266    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0926 17:53:39.813927    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0926 17:53:39.827332    4178 ssh_runner.go:195] Run: openssl version
	I0926 17:53:39.831566    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0926 17:53:39.839850    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.843163    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:36 /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.843206    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.847374    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 17:53:39.855624    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 17:53:39.863965    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.867400    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.867452    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.871715    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 17:53:39.879907    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1679.pem && ln -fs /usr/share/ca-certificates/1679.pem /etc/ssl/certs/1679.pem"
	I0926 17:53:39.888247    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.891606    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:36 /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.891654    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.895855    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1679.pem /etc/ssl/certs/51391683.0"
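
Editor's note: the three test/hash/ln sequences above install each PEM under its OpenSSL subject hash (e.g. 3ec20f2e.0, b5213941.0, 51391683.0), the layout OpenSSL's CApath lookup expects. A sketch that shells out for the hash and creates the symlink; the cert list is taken from the log, and root privileges are assumed:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash symlinks /etc/ssl/certs/<subject-hash>.0 to the cert, the
// naming scheme OpenSSL's hashed CApath directory lookup requires.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // replace a stale link, mirroring ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	for _, c := range []string{
		"/usr/share/ca-certificates/16792.pem",
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/1679.pem",
	} {
		if err := linkByHash(c); err != nil {
			fmt.Fprintln(os.Stderr, c, err)
		}
	}
}
```
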
	I0926 17:53:39.904043    4178 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 17:53:39.907450    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0926 17:53:39.911778    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0926 17:53:39.915909    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0926 17:53:39.920037    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0926 17:53:39.924167    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0926 17:53:39.928372    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
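
Editor's note: `openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 24 hours; each control-plane cert is screened that way before being reused. The equivalent check with Go's crypto/x509, the file path being illustrative:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires inside d,
// matching `openssl x509 -checkend` semantics.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon) // the openssl form exits non-zero here
}
```
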
	I0926 17:53:39.932543    4178 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.1 docker true true} ...
	I0926 17:53:39.932604    4178 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-476000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 17:53:39.932624    4178 kube-vip.go:115] generating kube-vip config ...
	I0926 17:53:39.932670    4178 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0926 17:53:39.944715    4178 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0926 17:53:39.944753    4178 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
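
Editor's note: the kube-vip manifest above is a static pod, started by the kubelet straight from /etc/kubernetes/manifests without the API server, which is how the 192.169.0.254 VIP can come up before the control plane does. Its leader-election env values only work if the usual ordering holds (retry period < renew deadline < lease duration, an assumed client-go-style constraint); a trivial Go check of the values from the manifest:

```go
package main

import "fmt"

func main() {
	// Values from the manifest above, in seconds.
	leaseDuration, renewDeadline, retryPeriod := 5, 3, 1

	// Leader election requires this ordering so the current leader can
	// renew its lease before another candidate may claim it.
	ok := retryPeriod < renewDeadline && renewDeadline < leaseDuration
	fmt.Println("lease timing valid:", ok) // true for 1 < 3 < 5
}
```
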
	I0926 17:53:39.944822    4178 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0926 17:53:39.953541    4178 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 17:53:39.953597    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0926 17:53:39.961618    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0926 17:53:39.975007    4178 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 17:53:39.988472    4178 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0926 17:53:40.002021    4178 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0926 17:53:40.004933    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
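
	The bash one-liner above keeps the /etc/hosts update idempotent: the preceding grep checks whether a control-plane.minikube.internal entry already exists, and the pipeline rewrites the file by filtering out any stale entry (grep -v) before appending the current VIP mapping. The same logic in Go (an illustrative sketch, not minikube's implementation; path and file mode are assumptions):

	```go
	package main

	import (
		"log"
		"os"
		"strings"
	)

	// upsertHost mirrors the bash pipeline: drop any line already ending in
	// "\t<host>", then append "<ip>\t<host>".
	func upsertHost(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // filter the stale entry, like grep -v
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := upsertHost("/etc/hosts", "192.169.0.254", "control-plane.minikube.internal"); err != nil {
			log.Fatal(err)
		}
	}
	```
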
	I0926 17:53:40.015059    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:40.118867    4178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:53:40.133377    4178 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:53:40.133568    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:40.154757    4178 out.go:177] * Verifying Kubernetes components...
	I0926 17:53:40.196346    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:40.323445    4178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:53:40.338817    4178 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:40.339037    4178 kapi.go:59] client config for ha-476000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key", CAFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7459f00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0926 17:53:40.339084    4178 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0926 17:53:40.339280    4178 node_ready.go:35] waiting up to 6m0s for node "ha-476000-m02" to be "Ready" ...
	I0926 17:53:40.339354    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:40.339359    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:40.339366    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:40.339369    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:47.201921    4178 round_trippers.go:574] Response Status:  in 6862 milliseconds
	I0926 17:53:48.202681    4178 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:48.202709    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:48.202713    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:48.202720    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:48.202724    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:49.203128    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:49.203194    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.1:52091->192.169.0.5:8443: read: connection reset by peer
	I0926 17:53:49.203240    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:49.203247    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:49.203252    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:49.203256    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:50.204478    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:50.204619    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:50.204631    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:50.204642    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:50.204649    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:51.204974    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:51.205045    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:51.205098    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:51.205108    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:51.205118    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:51.205124    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:52.205352    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:52.205474    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:52.205485    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:52.205496    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:52.205505    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:53.206703    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:53.206766    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:53.206822    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:53.206831    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:53.206843    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:53.206849    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:54.208032    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:54.208160    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:54.208172    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:54.208183    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:54.208190    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:55.208420    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:55.208484    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:55.208561    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:55.208572    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:55.208582    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:55.208586    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:56.209388    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:56.209496    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:56.209507    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:56.209517    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:56.209529    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:57.211492    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:57.211560    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:57.211643    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:57.211654    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:57.211665    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:57.211671    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:58.213441    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:58.213520    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:58.213528    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:58.213535    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:58.213538    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:59.215627    4178 round_trippers.go:574] Response Status:  in 1002 milliseconds
	I0926 17:53:59.215689    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:59.215761    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:59.215770    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:59.215781    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:59.215792    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:00.214970    4178 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0926 17:54:00.215057    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:00.215066    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:00.215072    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:00.215075    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:02.766651    4178 round_trippers.go:574] Response Status: 200 OK in 2551 milliseconds
	I0926 17:54:02.767320    4178 node_ready.go:53] node "ha-476000-m02" has status "Ready":"False"
	I0926 17:54:02.767364    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:02.767371    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:02.767378    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:02.767382    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:02.808455    4178 round_trippers.go:574] Response Status: 200 OK in 41 milliseconds
	I0926 17:54:02.839499    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:02.839515    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:02.839522    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:02.839524    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:02.844502    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:03.339950    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:03.339974    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:03.340014    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:03.340033    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:03.343931    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:03.839836    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:03.839849    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:03.839855    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:03.839859    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:03.842811    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:04.340378    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:04.340403    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.340414    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.340421    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.344418    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:04.839736    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:04.839752    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.839758    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.839762    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.842629    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:04.843116    4178 node_ready.go:49] node "ha-476000-m02" has status "Ready":"True"
	I0926 17:54:04.843129    4178 node_ready.go:38] duration metric: took 24.503742617s for node "ha-476000-m02" to be "Ready" ...
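
	The stretch of round_trippers traffic above is a simple poll loop: GET the node object roughly every 500 ms to 1 s, treat connection-refused and connection-reset as transient while the API server behind the VIP restarts (note the kubeadm.go:483 override to the direct node IP), and stop once the node's Ready condition reports True, which here took 24.5 s. A condensed client-go sketch of that loop (assumes a reachable kubeconfig; helper names are mine, not minikube's):

	```go
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls until the node's Ready condition is True, retrying
	// through transient dial errors, like the loop in the log above.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // connection refused/reset: retry
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		if err := waitNodeReady(context.Background(), cs, "ha-476000-m02", 6*time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println(`node "ha-476000-m02" is Ready`)
	}
	```
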
	I0926 17:54:04.843136    4178 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0926 17:54:04.843170    4178 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0926 17:54:04.843178    4178 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0926 17:54:04.843227    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0926 17:54:04.843232    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.843238    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.843242    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.851447    4178 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0926 17:54:04.858185    4178 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:04.858238    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:04.858243    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.858250    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.858254    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.860121    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:04.860597    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:04.860608    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.860614    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.860619    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.862704    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
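
	From here on, pod_ready.go pairs each coredns pod GET with a GET of the node hosting it: a system-critical pod only counts as "Ready" when its PodReady condition is True and its node is still Ready. A tiny illustrative helper for the pod-side check (hypothetical, not minikube's pod_ready.go):

	```go
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// podReady mirrors the check behind the "Ready" test in the log:
	// the PodReady condition must be True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionFalse},
		}}}
		fmt.Println("ready:", podReady(p)) // ready: false, matching the log
	}
	```
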
	I0926 17:54:05.358322    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:05.358334    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.358341    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.358344    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.361386    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:05.361939    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:05.361947    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.361954    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.361958    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.366335    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:05.858443    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:05.858462    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.858485    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.858489    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.861181    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:05.861691    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:05.861698    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.861704    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.861706    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.863911    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:06.359311    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:06.359342    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.359350    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.359354    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.362329    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:06.362841    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:06.362848    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.362854    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.362864    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.365951    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:06.860115    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:06.860140    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.860152    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.860192    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.863829    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:06.864356    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:06.864364    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.864370    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.864372    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.866293    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:06.866641    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:07.359755    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:07.359781    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.359791    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.359796    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.362929    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:07.363432    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:07.363440    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.363449    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.363454    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.365354    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:07.859403    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:07.859428    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.859440    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.859447    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.863936    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:07.864482    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:07.864489    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.864494    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.864497    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.866695    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:08.359070    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:08.359095    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.359104    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.359110    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.363413    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:08.363975    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:08.363983    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.363989    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.363996    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.366160    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:08.858562    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:08.858596    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.858604    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.858608    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.861584    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:08.862306    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:08.862313    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.862319    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.862329    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.864555    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:09.359666    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:09.359694    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.359706    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.359710    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.364444    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:09.364796    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:09.364802    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.364808    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.364812    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.367017    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:09.367391    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:09.859578    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:09.859628    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.859645    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.859654    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.863289    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:09.863926    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:09.863934    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.863940    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.863942    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.865998    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:10.358368    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:10.358385    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.358391    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.358396    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.366195    4178 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0926 17:54:10.366734    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:10.366743    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.366752    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.366755    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.369544    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:10.859656    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:10.859683    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.859694    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.859701    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.864043    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:10.864491    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:10.864499    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.864504    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.864508    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.866558    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:11.360000    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:11.360026    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.360038    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.360045    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.364064    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:11.364604    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:11.364611    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.364617    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.364620    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.366561    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:11.859988    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:11.860011    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.860023    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.860028    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.863780    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:11.864488    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:11.864496    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.864502    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.864505    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.866527    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:11.866879    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:12.359231    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:12.359302    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.359317    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.359325    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.363142    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:12.363807    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:12.363815    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.363820    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.363823    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.365720    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:12.859295    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:12.859321    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.859332    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.859336    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.863604    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:12.864232    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:12.864243    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.864249    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.864252    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.866340    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:13.360473    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:13.360500    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.360511    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.360516    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.364925    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:13.365659    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:13.365667    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.365672    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.365677    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.367805    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:13.858451    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:13.858477    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.858490    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.858495    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.862381    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:13.862921    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:13.862929    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.862934    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.862938    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.864941    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:14.358942    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:14.358966    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.359005    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.359013    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.365723    4178 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0926 17:54:14.366181    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:14.366189    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.366193    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.366197    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.368552    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:14.368954    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:14.860475    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:14.860501    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.860543    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.860550    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.864207    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:14.864620    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:14.864628    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.864634    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.864637    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.866896    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:15.358734    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:15.358751    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.358757    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.358761    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.361477    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:15.362047    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:15.362056    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.362062    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.362072    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.364404    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:15.859641    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:15.859669    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.859681    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.859690    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.864301    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:15.864755    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:15.864762    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.864767    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.864771    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.866941    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:16.358689    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:16.358713    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.358762    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.358771    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.363038    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:16.363637    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:16.363644    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.363649    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.363665    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.365580    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:16.858829    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:16.858848    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.858857    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.858864    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.861418    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:16.861895    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:16.861903    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.861908    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.861913    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.864330    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:16.864660    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:17.358538    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:17.358557    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.358576    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.358581    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.361634    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:17.362216    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:17.362224    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.362230    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.362235    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.364368    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:17.858951    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:17.859025    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.859068    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.859083    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.863132    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:17.863643    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:17.863651    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.863660    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.863665    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.865816    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:18.358377    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:18.358396    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.358403    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.358429    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.364859    4178 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0926 17:54:18.365288    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:18.365296    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.365303    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.365306    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.367423    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:18.859211    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:18.859237    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.859250    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.859257    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.863321    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:18.863832    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:18.863840    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.863846    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.863849    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.865860    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:18.866261    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:19.358438    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:19.358453    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.358460    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.358463    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.361068    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:19.361685    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:19.361694    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.361700    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.361703    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.364079    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:19.859935    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:19.859961    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.859972    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.859979    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.864189    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:19.864623    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:19.864630    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.864638    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.864641    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.866680    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:20.359100    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:20.359154    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.359164    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.359169    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.362081    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:20.362587    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:20.362595    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.362601    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.362604    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.364581    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:20.860535    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:20.860561    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.860573    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.860581    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.864595    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:20.865051    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:20.865063    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.865070    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.865074    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.866939    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:20.867377    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:21.358839    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:21.358864    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.358910    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.358919    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.362304    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:21.362899    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:21.362907    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.362913    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.362923    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.364904    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:21.859198    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:21.859224    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.859235    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.859244    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.863464    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:21.863902    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:21.863911    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.863916    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.863920    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.866008    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:22.358500    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:22.358557    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.358567    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.358581    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.363039    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:22.363487    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:22.363495    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.363501    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.363504    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.365560    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:22.860486    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:22.860511    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.860523    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.860549    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.865059    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:22.865691    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:22.865699    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.865705    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.865708    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.867780    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:22.868136    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:23.358997    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:23.359023    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.359035    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.359043    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.363268    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:23.363930    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:23.363938    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.363944    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.363948    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.365982    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:23.858407    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:23.858421    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.858452    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.858457    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.861385    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:23.861801    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:23.861812    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.861818    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.861823    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.864061    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:24.360526    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:24.360553    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.360565    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.360571    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.364721    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:24.365349    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:24.365356    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.365362    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.365365    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.367430    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:24.858605    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:24.858630    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.858641    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.858648    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.862472    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:24.863003    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:24.863010    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.863016    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.863018    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.864908    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:25.358639    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:25.358664    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.358677    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.358684    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.362945    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:25.363487    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:25.363495    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.363501    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.363503    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.365691    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:25.366062    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:25.859315    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:25.859333    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.859341    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.859364    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.862801    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:25.863276    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:25.863284    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.863289    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.863293    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.865685    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:26.359001    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:26.359015    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.359021    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.359025    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.361573    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:26.362094    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:26.362101    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.362107    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.362111    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.364144    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:26.858599    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:26.858625    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.858637    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.858644    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.862247    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:26.862753    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:26.862762    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.862767    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.862771    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.864571    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:27.358862    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:27.358888    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.358899    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.358904    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.363109    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:27.363648    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:27.363657    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.363663    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.363669    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.365500    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:27.859752    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:27.859779    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.859790    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.859795    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.864255    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:27.864725    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:27.864733    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.864738    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.864741    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.866764    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:27.867055    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:28.359808    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:28.359835    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.359882    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.359890    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.363146    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:28.363572    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:28.363579    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.363585    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.363589    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.365498    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:28.858708    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:28.858734    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.858746    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.858752    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.862673    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:28.863231    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:28.863238    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.863244    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.863248    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.865181    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:29.359611    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:29.359640    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.359653    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.359660    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.362965    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:29.363411    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:29.363419    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.363425    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.363427    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.365174    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:29.859384    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:29.859402    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.859409    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.859414    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.862499    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:29.863033    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:29.863041    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.863047    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.863050    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.865154    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:30.359191    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:30.359209    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.359255    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.359265    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.361836    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:30.362303    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:30.362312    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.362317    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.362320    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.364567    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:30.364980    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:30.860033    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:30.860066    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.860101    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.860109    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.864359    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:30.864782    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:30.864790    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.864799    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.864805    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.866798    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:31.358678    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:31.358711    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.358762    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.358772    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.363329    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:31.363731    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:31.363739    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.363745    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.363751    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.365894    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:31.858683    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:31.858706    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.858718    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.858724    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.862717    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:31.863254    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:31.863262    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.863268    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.863272    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.865220    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:32.359370    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:32.359420    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.359434    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.359442    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.362904    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:32.363502    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:32.363510    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.363516    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.363518    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.365729    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:32.366016    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:32.859955    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:32.859990    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.859997    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.860001    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.874510    4178 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0926 17:54:32.875130    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:32.875137    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.875142    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.875145    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.883403    4178 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0926 17:54:33.359964    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:33.360006    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.360019    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.360025    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.362527    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:33.362934    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:33.362942    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.362948    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.362953    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.365277    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:33.860043    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:33.860070    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.860082    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.860089    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.864487    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:33.864960    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:33.864968    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.864974    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.864978    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.866813    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:34.359408    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:34.359422    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.359453    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.359457    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.361843    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:34.362407    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:34.362415    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.362419    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.362427    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.364587    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:34.859087    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:34.859113    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.859124    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.859132    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.863123    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:34.863508    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:34.863516    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.863522    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.863525    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.865516    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:34.865853    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:35.359972    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:35.359997    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.360039    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.360048    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.364311    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:35.364957    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:35.364964    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.364970    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.364974    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.367232    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:35.859251    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:35.859265    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.859271    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.859275    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.861746    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:35.862292    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:35.862304    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.862318    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.862323    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.864289    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:36.360234    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:36.360274    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.360284    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.360291    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.363297    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:36.363726    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:36.363734    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.363740    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.363743    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.365689    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:36.859037    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:36.859105    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.859119    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.859130    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.863205    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:36.863621    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:36.863629    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.863635    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.863638    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.865642    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:36.865933    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:37.359101    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:37.359127    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.359139    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.359145    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.363256    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:37.363851    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:37.363859    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.363865    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.363868    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.365908    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:37.859282    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:37.859308    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.859319    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.859325    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.863341    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:37.863718    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:37.863726    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.863731    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.863735    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.865672    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:38.359013    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:38.359055    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.359065    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.359070    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.361936    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:38.362521    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:38.362529    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.362534    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.362538    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.364699    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:38.859426    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:38.859453    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.859466    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.859475    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.863509    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:38.864012    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:38.864020    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.864025    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.864029    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.866259    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:38.866728    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:39.358730    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:39.358748    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.358756    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.358765    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.362410    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:39.362956    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:39.362964    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.362970    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.362979    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.365004    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:39.858564    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:39.858584    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.858592    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.858598    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.861794    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:39.862200    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:39.862208    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.862214    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.862219    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.864175    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.358549    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:40.358586    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.358596    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.358600    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.361533    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:40.362003    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.362011    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.362017    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.362020    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.364141    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:40.860048    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:40.860077    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.860087    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.860093    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.863900    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:40.864305    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.864314    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.864320    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.864322    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.866266    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.866599    4178 pod_ready.go:93] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.866610    4178 pod_ready.go:82] duration metric: took 36.008276067s for pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace to be "Ready" ...
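	
	[Editor's note] The paired GET .../pods/coredns-... and GET .../nodes/ha-476000 lines above are the readiness poll that pod_ready.go runs roughly every 500ms until the pod reports Ready (36s here). A minimal client-go sketch of that polling pattern, assuming a standard clientset; waitPodReady is a hypothetical helper, not minikube's actual pod_ready.go:
	
	    // Package podwait: a sketch of polling a pod's Ready condition,
	    // the same GET-pod / check-condition loop visible in the log above.
	    package podwait
	
	    import (
	        "context"
	        "time"
	
	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )
	
	    // waitPodReady polls the API server every 500ms (as the log does)
	    // until the pod's Ready condition is True or the timeout expires.
	    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
	            func(ctx context.Context) (bool, error) {
	                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	                if err != nil {
	                    return false, nil // treat errors as transient; keep polling
	                }
	                for _, c := range pod.Status.Conditions {
	                    if c.Type == corev1.PodReady {
	                        return c.Status == corev1.ConditionTrue, nil
	                    }
	                }
	                return false, nil
	            })
	    }
	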
	I0926 17:54:40.866616    4178 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7jwgv" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.866646    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7jwgv
	I0926 17:54:40.866651    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.866657    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.866661    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.868466    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.868930    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.868938    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.868944    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.868948    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.870736    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.871103    4178 pod_ready.go:93] pod "coredns-7c65d6cfc9-7jwgv" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.871111    4178 pod_ready.go:82] duration metric: took 4.489575ms for pod "coredns-7c65d6cfc9-7jwgv" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.871118    4178 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.871146    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-476000
	I0926 17:54:40.871150    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.871156    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.871160    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.873206    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:40.873700    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.873707    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.873713    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.873717    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.875461    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.875829    4178 pod_ready.go:93] pod "etcd-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.875837    4178 pod_ready.go:82] duration metric: took 4.713943ms for pod "etcd-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.875844    4178 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.875875    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-476000-m02
	I0926 17:54:40.875880    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.875885    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.875890    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.877741    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.878137    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:40.878145    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.878151    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.878155    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.880023    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.880375    4178 pod_ready.go:93] pod "etcd-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.880384    4178 pod_ready.go:82] duration metric: took 4.534554ms for pod "etcd-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.880390    4178 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.880419    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-476000-m03
	I0926 17:54:40.880424    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.880429    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.880433    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.882094    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.882474    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:40.882481    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.882486    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.882496    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.884251    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.884613    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "etcd-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:40.884622    4178 pod_ready.go:82] duration metric: took 4.227661ms for pod "etcd-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:40.884628    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "etcd-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
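	
	[Editor's note] The "skipping!" lines above show a node gate: before a pod on ha-476000-m03 can count as Ready, the hosting node's own Ready condition is checked, and status "Unknown" (kubelet no longer reporting) fails that check. A minimal sketch of such a gate, under the assumption that only the Ready condition matters; nodeIsReady is a hypothetical helper, not minikube's own code:
	
	    // Package nodegate: a sketch of checking a node's Ready condition.
	    package nodegate
	
	    import (
	        corev1 "k8s.io/api/core/v1"
	    )
	
	    // nodeIsReady reports whether the node's Ready condition is True.
	    // ConditionUnknown (as logged for ha-476000-m03) is not Ready.
	    func nodeIsReady(node *corev1.Node) bool {
	        for _, c := range node.Status.Conditions {
	            if c.Type == corev1.NodeReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }
	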
	I0926 17:54:40.884638    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.061560    4178 request.go:632] Waited for 176.87189ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000
	I0926 17:54:41.061616    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000
	I0926 17:54:41.061655    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.061670    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.061677    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.065303    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:41.262138    4178 request.go:632] Waited for 196.341694ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:41.262261    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:41.262270    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.262282    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.262290    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.266333    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:41.266689    4178 pod_ready.go:93] pod "kube-apiserver-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:41.266699    4178 pod_ready.go:82] duration metric: took 382.053003ms for pod "kube-apiserver-ha-476000" in "kube-system" namespace to be "Ready" ...
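	
	[Editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's token-bucket rate limiter, which queues requests client-side once the burst of readiness GETs exceeds its budget (QPS 5, burst 10 by default on rest.Config). A minimal sketch, assuming a kubeconfig-based setup, of raising those limits; the values and the newFastClient name are illustrative, not minikube's configuration:
	
	    // Package fastclient: a sketch of loosening client-go's
	    // client-side rate limiter via rest.Config.QPS / Burst.
	    package fastclient
	
	    import (
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	        if err != nil {
	            return nil, err
	        }
	        // Defaults are QPS=5, Burst=10; tight polling loops like the
	        // one above can exceed that and get queued client-side.
	        cfg.QPS = 50
	        cfg.Burst = 100
	        return kubernetes.NewForConfig(cfg)
	    }
	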
	I0926 17:54:41.266705    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.460472    4178 request.go:632] Waited for 193.723597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m02
	I0926 17:54:41.460525    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m02
	I0926 17:54:41.460535    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.460578    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.460588    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.464471    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:41.661359    4178 request.go:632] Waited for 196.505849ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:41.661462    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:41.661475    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.661486    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.661494    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.665427    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:41.665770    4178 pod_ready.go:93] pod "kube-apiserver-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:41.665780    4178 pod_ready.go:82] duration metric: took 399.068092ms for pod "kube-apiserver-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.665789    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.861535    4178 request.go:632] Waited for 195.701622ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m03
	I0926 17:54:41.861634    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m03
	I0926 17:54:41.861648    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.861668    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.861680    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.865792    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:42.061777    4178 request.go:632] Waited for 195.542882ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:42.061836    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:42.061869    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.061880    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.061888    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.066352    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:42.066752    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:42.066763    4178 pod_ready.go:82] duration metric: took 400.967857ms for pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:42.066770    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:42.066774    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.260909    4178 request.go:632] Waited for 194.055971ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000
	I0926 17:54:42.260962    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000
	I0926 17:54:42.261001    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.261021    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.261031    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.264905    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:42.460758    4178 request.go:632] Waited for 195.327303ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:42.460808    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:42.460816    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.460827    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.460837    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.464434    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:42.464776    4178 pod_ready.go:93] pod "kube-controller-manager-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:42.464786    4178 pod_ready.go:82] duration metric: took 398.004555ms for pod "kube-controller-manager-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.464793    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.660316    4178 request.go:632] Waited for 195.46211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m02
	I0926 17:54:42.660458    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m02
	I0926 17:54:42.660474    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.660486    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.660494    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.665327    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:42.860122    4178 request.go:632] Waited for 194.468161ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:42.860201    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:42.860211    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.860222    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.860231    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.864049    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:42.864456    4178 pod_ready.go:93] pod "kube-controller-manager-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:42.864465    4178 pod_ready.go:82] duration metric: took 399.6655ms for pod "kube-controller-manager-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.864473    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:43.060814    4178 request.go:632] Waited for 196.258122ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m03
	I0926 17:54:43.060925    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m03
	I0926 17:54:43.060935    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.060947    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.060956    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.065088    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:43.261824    4178 request.go:632] Waited for 196.351744ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:43.261944    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:43.261957    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.261967    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.261984    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.266272    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:43.266738    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:43.266748    4178 pod_ready.go:82] duration metric: took 402.268136ms for pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:43.266762    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:43.266768    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5d8nb" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:43.460501    4178 request.go:632] Waited for 193.687301ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d8nb
	I0926 17:54:43.460615    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d8nb
	I0926 17:54:43.460627    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.460639    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.460647    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.463846    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:43.662152    4178 request.go:632] Waited for 197.799796ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m04
	I0926 17:54:43.662296    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m04
	I0926 17:54:43.662312    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.662324    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.662334    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.666430    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:43.666928    4178 pod_ready.go:98] node "ha-476000-m04" hosting pod "kube-proxy-5d8nb" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m04" has status "Ready":"Unknown"
	I0926 17:54:43.666940    4178 pod_ready.go:82] duration metric: took 400.16396ms for pod "kube-proxy-5d8nb" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:43.666946    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m04" hosting pod "kube-proxy-5d8nb" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m04" has status "Ready":"Unknown"
	I0926 17:54:43.666950    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bpsqv" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:43.860782    4178 request.go:632] Waited for 193.758415ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bpsqv
	I0926 17:54:43.860851    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bpsqv
	I0926 17:54:43.860893    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.860905    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.860912    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.865061    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:44.060850    4178 request.go:632] Waited for 195.218122ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:44.060902    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:44.060920    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.060968    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.060976    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.065008    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:44.065426    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-proxy-bpsqv" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:44.065437    4178 pod_ready.go:82] duration metric: took 398.480723ms for pod "kube-proxy-bpsqv" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:44.065443    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-proxy-bpsqv" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:44.065448    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ctdh4" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.260264    4178 request.go:632] Waited for 194.757329ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ctdh4
	I0926 17:54:44.260395    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ctdh4
	I0926 17:54:44.260404    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.260417    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.260424    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.264668    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:44.461295    4178 request.go:632] Waited for 196.119983ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:44.461373    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:44.461384    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.461399    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.461407    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.465035    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:44.465397    4178 pod_ready.go:93] pod "kube-proxy-ctdh4" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:44.465406    4178 pod_ready.go:82] duration metric: took 399.951689ms for pod "kube-proxy-ctdh4" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.465413    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrsx7" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.660616    4178 request.go:632] Waited for 195.1575ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrsx7
	I0926 17:54:44.660704    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrsx7
	I0926 17:54:44.660715    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.660726    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.660734    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.664476    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:44.860447    4178 request.go:632] Waited for 195.571151ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:44.860565    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:44.860578    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.860588    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.860596    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.864038    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:44.864554    4178 pod_ready.go:93] pod "kube-proxy-nrsx7" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:44.864566    4178 pod_ready.go:82] duration metric: took 399.145507ms for pod "kube-proxy-nrsx7" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.864575    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.060924    4178 request.go:632] Waited for 196.301993ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000
	I0926 17:54:45.061011    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000
	I0926 17:54:45.061022    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.061034    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.061042    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.065277    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:45.260098    4178 request.go:632] Waited for 194.412657ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:45.260187    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:45.260208    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.260220    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.260229    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.264296    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:45.264558    4178 pod_ready.go:93] pod "kube-scheduler-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:45.264567    4178 pod_ready.go:82] duration metric: took 399.984402ms for pod "kube-scheduler-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.264574    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.460204    4178 request.go:632] Waited for 195.586272ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m02
	I0926 17:54:45.460285    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m02
	I0926 17:54:45.460296    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.460307    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.460315    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.463717    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:45.661528    4178 request.go:632] Waited for 197.284014ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:45.661624    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:45.661634    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.661645    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.661653    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.666080    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:45.666323    4178 pod_ready.go:93] pod "kube-scheduler-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:45.666333    4178 pod_ready.go:82] duration metric: took 401.752851ms for pod "kube-scheduler-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.666340    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.860703    4178 request.go:632] Waited for 194.311899ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m03
	I0926 17:54:45.860736    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m03
	I0926 17:54:45.860740    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.860746    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.860750    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.863521    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:46.061792    4178 request.go:632] Waited for 197.829608ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:46.061901    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:46.061915    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:46.061926    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:46.061934    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:46.065839    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:46.066244    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:46.066255    4178 pod_ready.go:82] duration metric: took 399.908641ms for pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:46.066262    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:46.066267    4178 pod_ready.go:39] duration metric: took 41.222971189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
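
The waits above follow minikube's pod_ready pattern visible in the log: fetch the pod, then fetch its hosting node, and skip the pod with "(skipping!)" when the node's Ready condition is not "True". A minimal client-go sketch of that guard (names and flow are illustrative, not minikube's actual code):

    // podready_sketch.go — a minimal client-go sketch of the node-Ready guard used
    // by the pod waits above; names and flow are illustrative, not minikube's code.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the node's Ready condition is "True".
    func nodeIsReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-ctdh4", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), pod.Spec.NodeName, metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	if !nodeIsReady(node) {
    		// Mirrors the "(skipping!)" branch above: a pod on a non-Ready node is skipped.
    		fmt.Printf("node %q not Ready, skipping pod %q\n", node.Name, pod.Name)
    		return
    	}
    	fmt.Printf("pod %q is on Ready node %q\n", pod.Name, node.Name)
    }
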
	I0926 17:54:46.066282    4178 api_server.go:52] waiting for apiserver process to appear ...
	I0926 17:54:46.066375    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:54:46.079414    4178 logs.go:276] 2 containers: [7aabe646a9fe 3fae3e334c04]
	I0926 17:54:46.079513    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:54:46.092379    4178 logs.go:276] 2 containers: [f525de12dc97 bcf266ec7564]
	I0926 17:54:46.092476    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:54:46.105011    4178 logs.go:276] 0 containers: []
	W0926 17:54:46.105025    4178 logs.go:278] No container was found matching "coredns"
	I0926 17:54:46.105107    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:54:46.118452    4178 logs.go:276] 2 containers: [d9fc15fd82f1 d8ed18933791]
	I0926 17:54:46.118550    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:54:46.132316    4178 logs.go:276] 2 containers: [b52376c9cdcd a5a9a7e18064]
	I0926 17:54:46.132402    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:54:46.145649    4178 logs.go:276] 2 containers: [350a07550ad1 f82271bde6b0]
	I0926 17:54:46.145746    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:54:46.160399    4178 logs.go:276] 2 containers: [b73a2fda347d 981336ad66c6]
	I0926 17:54:46.160426    4178 logs.go:123] Gathering logs for kube-apiserver [7aabe646a9fe] ...
	I0926 17:54:46.160432    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aabe646a9fe"
	I0926 17:54:46.180676    4178 logs.go:123] Gathering logs for kube-apiserver [3fae3e334c04] ...
	I0926 17:54:46.180690    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fae3e334c04"
	I0926 17:54:46.213941    4178 logs.go:123] Gathering logs for kindnet [981336ad66c6] ...
	I0926 17:54:46.213956    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 981336ad66c6"
	I0926 17:54:46.229008    4178 logs.go:123] Gathering logs for container status ...
	I0926 17:54:46.229022    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:54:46.263727    4178 logs.go:123] Gathering logs for dmesg ...
	I0926 17:54:46.263743    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:54:46.275216    4178 logs.go:123] Gathering logs for etcd [f525de12dc97] ...
	I0926 17:54:46.275229    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f525de12dc97"
	I0926 17:54:46.340546    4178 logs.go:123] Gathering logs for etcd [bcf266ec7564] ...
	I0926 17:54:46.340563    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf266ec7564"
	I0926 17:54:46.368786    4178 logs.go:123] Gathering logs for kube-scheduler [d8ed18933791] ...
	I0926 17:54:46.368802    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ed18933791"
	I0926 17:54:46.392911    4178 logs.go:123] Gathering logs for kube-proxy [a5a9a7e18064] ...
	I0926 17:54:46.392926    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5a9a7e18064"
	I0926 17:54:46.411685    4178 logs.go:123] Gathering logs for kubelet ...
	I0926 17:54:46.411700    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:54:46.453572    4178 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:54:46.453588    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:54:46.819319    4178 logs.go:123] Gathering logs for kube-scheduler [d9fc15fd82f1] ...
	I0926 17:54:46.819338    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9fc15fd82f1"
	I0926 17:54:46.834299    4178 logs.go:123] Gathering logs for kube-proxy [b52376c9cdcd] ...
	I0926 17:54:46.834315    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b52376c9cdcd"
	I0926 17:54:46.850264    4178 logs.go:123] Gathering logs for Docker ...
	I0926 17:54:46.850278    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:54:46.881220    4178 logs.go:123] Gathering logs for kube-controller-manager [350a07550ad1] ...
	I0926 17:54:46.881233    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 350a07550ad1"
	I0926 17:54:46.915123    4178 logs.go:123] Gathering logs for kube-controller-manager [f82271bde6b0] ...
	I0926 17:54:46.915139    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82271bde6b0"
	I0926 17:54:46.943154    4178 logs.go:123] Gathering logs for kindnet [b73a2fda347d] ...
	I0926 17:54:46.943169    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a2fda347d"
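
Each "Gathering logs" pass above lists the matching k8s_* containers with docker ps, then tails the last 400 lines of each with docker logs. A local sketch of the same pattern (run directly instead of through ssh_runner into the VM):

    // gatherlogs_sketch.go — a minimal sketch of the gather pattern above, run
    // locally rather than over SSH; component names mirror the k8s_* name filters.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{"kube-apiserver", "etcd", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet"}
    	for _, comp := range components {
    		// List container IDs, as in: docker ps -a --filter=name=k8s_<comp> --format={{.ID}}
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+comp, "--format", "{{.ID}}").Output()
    		if err != nil {
    			fmt.Printf("listing %s containers: %v\n", comp, err)
    			continue
    		}
    		for _, id := range strings.Fields(string(out)) {
    			// Tail the last 400 lines of each matching container, as in the log above.
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("=== %s [%s] ===\n%s\n", comp, id, logs)
    		}
    	}
    }
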
	I0926 17:54:49.459929    4178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 17:54:49.472910    4178 api_server.go:72] duration metric: took 1m9.339247453s to wait for apiserver process to appear ...
	I0926 17:54:49.472923    4178 api_server.go:88] waiting for apiserver healthz status ...
	I0926 17:54:49.473016    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:54:49.489783    4178 logs.go:276] 2 containers: [7aabe646a9fe 3fae3e334c04]
	I0926 17:54:49.489876    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:54:49.503069    4178 logs.go:276] 2 containers: [f525de12dc97 bcf266ec7564]
	I0926 17:54:49.503157    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:54:49.514340    4178 logs.go:276] 0 containers: []
	W0926 17:54:49.514353    4178 logs.go:278] No container was found matching "coredns"
	I0926 17:54:49.514430    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:54:49.528690    4178 logs.go:276] 2 containers: [d9fc15fd82f1 d8ed18933791]
	I0926 17:54:49.528782    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:54:49.540774    4178 logs.go:276] 2 containers: [b52376c9cdcd a5a9a7e18064]
	I0926 17:54:49.540870    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:54:49.553605    4178 logs.go:276] 2 containers: [350a07550ad1 f82271bde6b0]
	I0926 17:54:49.553693    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:54:49.566939    4178 logs.go:276] 2 containers: [b73a2fda347d 981336ad66c6]
	I0926 17:54:49.566961    4178 logs.go:123] Gathering logs for kindnet [981336ad66c6] ...
	I0926 17:54:49.566967    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 981336ad66c6"
	I0926 17:54:49.584163    4178 logs.go:123] Gathering logs for etcd [bcf266ec7564] ...
	I0926 17:54:49.584179    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf266ec7564"
	I0926 17:54:49.608092    4178 logs.go:123] Gathering logs for kube-controller-manager [f82271bde6b0] ...
	I0926 17:54:49.608107    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82271bde6b0"
	I0926 17:54:49.640526    4178 logs.go:123] Gathering logs for etcd [f525de12dc97] ...
	I0926 17:54:49.640542    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f525de12dc97"
	I0926 17:54:49.707920    4178 logs.go:123] Gathering logs for kube-scheduler [d9fc15fd82f1] ...
	I0926 17:54:49.707937    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9fc15fd82f1"
	I0926 17:54:49.725537    4178 logs.go:123] Gathering logs for kube-proxy [b52376c9cdcd] ...
	I0926 17:54:49.725551    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b52376c9cdcd"
	I0926 17:54:49.747118    4178 logs.go:123] Gathering logs for kube-proxy [a5a9a7e18064] ...
	I0926 17:54:49.747134    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5a9a7e18064"
	I0926 17:54:49.763059    4178 logs.go:123] Gathering logs for kindnet [b73a2fda347d] ...
	I0926 17:54:49.763073    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a2fda347d"
	I0926 17:54:49.780606    4178 logs.go:123] Gathering logs for container status ...
	I0926 17:54:49.780619    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:54:49.815474    4178 logs.go:123] Gathering logs for kubelet ...
	I0926 17:54:49.815490    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:54:49.856341    4178 logs.go:123] Gathering logs for kube-apiserver [3fae3e334c04] ...
	I0926 17:54:49.856359    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fae3e334c04"
	I0926 17:54:49.895001    4178 logs.go:123] Gathering logs for kube-apiserver [7aabe646a9fe] ...
	I0926 17:54:49.895016    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aabe646a9fe"
	I0926 17:54:49.915291    4178 logs.go:123] Gathering logs for kube-scheduler [d8ed18933791] ...
	I0926 17:54:49.915307    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ed18933791"
	I0926 17:54:49.931682    4178 logs.go:123] Gathering logs for kube-controller-manager [350a07550ad1] ...
	I0926 17:54:49.931698    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 350a07550ad1"
	I0926 17:54:49.962905    4178 logs.go:123] Gathering logs for Docker ...
	I0926 17:54:49.962920    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:54:49.995739    4178 logs.go:123] Gathering logs for dmesg ...
	I0926 17:54:49.995756    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:54:50.006748    4178 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:54:50.006764    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:54:52.683223    4178 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0926 17:54:52.688111    4178 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0926 17:54:52.688148    4178 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0926 17:54:52.688152    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:52.688158    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:52.688162    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:52.688774    4178 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0926 17:54:52.688866    4178 api_server.go:141] control plane version: v1.31.1
	I0926 17:54:52.688877    4178 api_server.go:131] duration metric: took 3.215937625s to wait for apiserver health ...
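
The healthz wait above polls https://192.169.0.5:8443/healthz until it returns 200 with body "ok", then reads /version for the control-plane version. A minimal sketch of both probes (TLS verification is skipped here purely for illustration; minikube authenticates with its cluster CA and client certificates instead):

    // healthz_sketch.go — a minimal sketch of the healthz/version probes above.
    // InsecureSkipVerify is an illustration-only shortcut, not minikube's behavior.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for _, path := range []string{"/healthz", "/version"} {
    		resp, err := client.Get("https://192.169.0.5:8443" + path)
    		if err != nil {
    			fmt.Println(path, "error:", err)
    			continue
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body)
    	}
    }
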
	I0926 17:54:52.688882    4178 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 17:54:52.688964    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:54:52.702208    4178 logs.go:276] 2 containers: [7aabe646a9fe 3fae3e334c04]
	I0926 17:54:52.702296    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:54:52.716057    4178 logs.go:276] 2 containers: [f525de12dc97 bcf266ec7564]
	I0926 17:54:52.716146    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:54:52.730288    4178 logs.go:276] 0 containers: []
	W0926 17:54:52.730303    4178 logs.go:278] No container was found matching "coredns"
	I0926 17:54:52.730387    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:54:52.744133    4178 logs.go:276] 2 containers: [d9fc15fd82f1 d8ed18933791]
	I0926 17:54:52.744229    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:54:52.757357    4178 logs.go:276] 2 containers: [b52376c9cdcd a5a9a7e18064]
	I0926 17:54:52.757447    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:54:52.770397    4178 logs.go:276] 2 containers: [350a07550ad1 f82271bde6b0]
	I0926 17:54:52.770488    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:54:52.783588    4178 logs.go:276] 2 containers: [b73a2fda347d 981336ad66c6]
	I0926 17:54:52.783609    4178 logs.go:123] Gathering logs for dmesg ...
	I0926 17:54:52.783615    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:54:52.794149    4178 logs.go:123] Gathering logs for kube-scheduler [d9fc15fd82f1] ...
	I0926 17:54:52.794162    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9fc15fd82f1"
	I0926 17:54:52.810239    4178 logs.go:123] Gathering logs for kube-proxy [a5a9a7e18064] ...
	I0926 17:54:52.810253    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5a9a7e18064"
	I0926 17:54:52.828364    4178 logs.go:123] Gathering logs for kube-controller-manager [350a07550ad1] ...
	I0926 17:54:52.828379    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 350a07550ad1"
	I0926 17:54:52.859712    4178 logs.go:123] Gathering logs for kindnet [b73a2fda347d] ...
	I0926 17:54:52.859726    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a2fda347d"
	I0926 17:54:52.877881    4178 logs.go:123] Gathering logs for container status ...
	I0926 17:54:52.877898    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:54:52.920788    4178 logs.go:123] Gathering logs for kube-scheduler [d8ed18933791] ...
	I0926 17:54:52.920802    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ed18933791"
	I0926 17:54:52.937686    4178 logs.go:123] Gathering logs for Docker ...
	I0926 17:54:52.937700    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:54:52.970435    4178 logs.go:123] Gathering logs for kubelet ...
	I0926 17:54:52.970449    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:54:53.015652    4178 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:54:53.015669    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:54:53.184377    4178 logs.go:123] Gathering logs for etcd [f525de12dc97] ...
	I0926 17:54:53.184391    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f525de12dc97"
	I0926 17:54:53.249067    4178 logs.go:123] Gathering logs for etcd [bcf266ec7564] ...
	I0926 17:54:53.249083    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf266ec7564"
	I0926 17:54:53.274003    4178 logs.go:123] Gathering logs for kube-controller-manager [f82271bde6b0] ...
	I0926 17:54:53.274019    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82271bde6b0"
	I0926 17:54:53.300047    4178 logs.go:123] Gathering logs for kube-apiserver [7aabe646a9fe] ...
	I0926 17:54:53.300062    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aabe646a9fe"
	I0926 17:54:53.321481    4178 logs.go:123] Gathering logs for kube-apiserver [3fae3e334c04] ...
	I0926 17:54:53.321495    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fae3e334c04"
	I0926 17:54:53.356023    4178 logs.go:123] Gathering logs for kube-proxy [b52376c9cdcd] ...
	I0926 17:54:53.356038    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b52376c9cdcd"
	I0926 17:54:53.374219    4178 logs.go:123] Gathering logs for kindnet [981336ad66c6] ...
	I0926 17:54:53.374233    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 981336ad66c6"
	I0926 17:54:55.893460    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0926 17:54:55.893486    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.893529    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.893539    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.899854    4178 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0926 17:54:55.904904    4178 system_pods.go:59] 26 kube-system pods found
	I0926 17:54:55.904920    4178 system_pods.go:61] "coredns-7c65d6cfc9-44l9n" [8009053f-fc43-43ba-a44e-fd1d53e88617] Running
	I0926 17:54:55.904925    4178 system_pods.go:61] "coredns-7c65d6cfc9-7jwgv" [20fb38a0-b993-41b3-9c91-98802d75ef47] Running
	I0926 17:54:55.904928    4178 system_pods.go:61] "etcd-ha-476000" [469cfe32-fa29-4816-afd7-3f115a3a6c67] Running
	I0926 17:54:55.904930    4178 system_pods.go:61] "etcd-ha-476000-m02" [d4c3ec0c-a151-4047-9a30-93cae10d24a0] Running
	I0926 17:54:55.904933    4178 system_pods.go:61] "etcd-ha-476000-m03" [e9166a94-af57-49ec-91a3-dc0c36083ca4] Running
	I0926 17:54:55.904936    4178 system_pods.go:61] "kindnet-44vxl" [488a3806-d7c1-4397-bff8-00d9ea3cb984] Running
	I0926 17:54:55.904938    4178 system_pods.go:61] "kindnet-4pnxr" [23b10d5b-fd63-4284-9c44-413b8bf80354] Running
	I0926 17:54:55.904941    4178 system_pods.go:61] "kindnet-hhrtc" [7ac457e3-ae85-4314-99dc-da37f5875807] Running
	I0926 17:54:55.904943    4178 system_pods.go:61] "kindnet-lgj66" [63fc5d71-c403-40d7-85a7-6ff48e307a79] Running
	I0926 17:54:55.904946    4178 system_pods.go:61] "kube-apiserver-ha-476000" [5c5e9fb4-0cda-4892-bae5-e51e839d2573] Running
	I0926 17:54:55.904948    4178 system_pods.go:61] "kube-apiserver-ha-476000-m02" [2d6d0355-152b-4be4-b810-f1c80c9ef24f] Running
	I0926 17:54:55.904951    4178 system_pods.go:61] "kube-apiserver-ha-476000-m03" [3ce79eb3-635c-4632-82e0-7240a7d49c50] Running
	I0926 17:54:55.904954    4178 system_pods.go:61] "kube-controller-manager-ha-476000" [9edef370-886e-4260-a3d3-99be564d90c6] Running
	I0926 17:54:55.904957    4178 system_pods.go:61] "kube-controller-manager-ha-476000-m02" [5eeed951-eaba-436f-9f80-8f68cb50425d] Running
	I0926 17:54:55.904960    4178 system_pods.go:61] "kube-controller-manager-ha-476000-m03" [e60f68a5-a353-4501-81c1-ac7d76820373] Running
	I0926 17:54:55.904962    4178 system_pods.go:61] "kube-proxy-5d8nb" [1e9c0178-2d58-4ca1-9fbe-c8e54d91bf1a] Running
	I0926 17:54:55.904965    4178 system_pods.go:61] "kube-proxy-bpsqv" [c264b112-c037-4332-9c47-2561ecb928f1] Running
	I0926 17:54:55.904967    4178 system_pods.go:61] "kube-proxy-ctdh4" [9a21a19a-5cad-4eb9-97f3-4c654fbbe59b] Running
	I0926 17:54:55.904970    4178 system_pods.go:61] "kube-proxy-nrsx7" [14b031d0-b044-4500-8cc1-1397c83c1886] Running
	I0926 17:54:55.904973    4178 system_pods.go:61] "kube-scheduler-ha-476000" [ef0e308c-1b28-413c-846e-24935489434d] Running
	I0926 17:54:55.904976    4178 system_pods.go:61] "kube-scheduler-ha-476000-m02" [2b75a7bc-a19c-4aeb-8521-10bd963cacbb] Running
	I0926 17:54:55.904978    4178 system_pods.go:61] "kube-scheduler-ha-476000-m03" [206cbf3f-bff1-4164-a622-3be7593c08ac] Running
	I0926 17:54:55.904981    4178 system_pods.go:61] "kube-vip-ha-476000" [376a9128-fe3c-41aa-b26c-da921ae20e68] Running
	I0926 17:54:55.904997    4178 system_pods.go:61] "kube-vip-ha-476000-m02" [7e907e74-7f15-4c04-b463-682a14766e66] Running
	I0926 17:54:55.905002    4178 system_pods.go:61] "kube-vip-ha-476000-m03" [bf2e3b8c-cf42-45b0-a4d7-a32b820bab21] Running
	I0926 17:54:55.905005    4178 system_pods.go:61] "storage-provisioner" [e3e367a7-6cda-4177-a81d-7897333308d7] Running
	I0926 17:54:55.905009    4178 system_pods.go:74] duration metric: took 3.216111125s to wait for pod list to return data ...
	I0926 17:54:55.905015    4178 default_sa.go:34] waiting for default service account to be created ...
	I0926 17:54:55.905062    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0926 17:54:55.905068    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.905073    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.905077    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.907842    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:55.908016    4178 default_sa.go:45] found service account: "default"
	I0926 17:54:55.908026    4178 default_sa.go:55] duration metric: took 3.006211ms for default service account to be created ...
	I0926 17:54:55.908031    4178 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 17:54:55.908061    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0926 17:54:55.908066    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.908071    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.908075    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.912026    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:55.917054    4178 system_pods.go:86] 26 kube-system pods found
	I0926 17:54:55.917066    4178 system_pods.go:89] "coredns-7c65d6cfc9-44l9n" [8009053f-fc43-43ba-a44e-fd1d53e88617] Running
	I0926 17:54:55.917070    4178 system_pods.go:89] "coredns-7c65d6cfc9-7jwgv" [20fb38a0-b993-41b3-9c91-98802d75ef47] Running
	I0926 17:54:55.917073    4178 system_pods.go:89] "etcd-ha-476000" [469cfe32-fa29-4816-afd7-3f115a3a6c67] Running
	I0926 17:54:55.917076    4178 system_pods.go:89] "etcd-ha-476000-m02" [d4c3ec0c-a151-4047-9a30-93cae10d24a0] Running
	I0926 17:54:55.917080    4178 system_pods.go:89] "etcd-ha-476000-m03" [e9166a94-af57-49ec-91a3-dc0c36083ca4] Running
	I0926 17:54:55.917083    4178 system_pods.go:89] "kindnet-44vxl" [488a3806-d7c1-4397-bff8-00d9ea3cb984] Running
	I0926 17:54:55.917085    4178 system_pods.go:89] "kindnet-4pnxr" [23b10d5b-fd63-4284-9c44-413b8bf80354] Running
	I0926 17:54:55.917088    4178 system_pods.go:89] "kindnet-hhrtc" [7ac457e3-ae85-4314-99dc-da37f5875807] Running
	I0926 17:54:55.917091    4178 system_pods.go:89] "kindnet-lgj66" [63fc5d71-c403-40d7-85a7-6ff48e307a79] Running
	I0926 17:54:55.917094    4178 system_pods.go:89] "kube-apiserver-ha-476000" [5c5e9fb4-0cda-4892-bae5-e51e839d2573] Running
	I0926 17:54:55.917097    4178 system_pods.go:89] "kube-apiserver-ha-476000-m02" [2d6d0355-152b-4be4-b810-f1c80c9ef24f] Running
	I0926 17:54:55.917100    4178 system_pods.go:89] "kube-apiserver-ha-476000-m03" [3ce79eb3-635c-4632-82e0-7240a7d49c50] Running
	I0926 17:54:55.917103    4178 system_pods.go:89] "kube-controller-manager-ha-476000" [9edef370-886e-4260-a3d3-99be564d90c6] Running
	I0926 17:54:55.917106    4178 system_pods.go:89] "kube-controller-manager-ha-476000-m02" [5eeed951-eaba-436f-9f80-8f68cb50425d] Running
	I0926 17:54:55.917110    4178 system_pods.go:89] "kube-controller-manager-ha-476000-m03" [e60f68a5-a353-4501-81c1-ac7d76820373] Running
	I0926 17:54:55.917113    4178 system_pods.go:89] "kube-proxy-5d8nb" [1e9c0178-2d58-4ca1-9fbe-c8e54d91bf1a] Running
	I0926 17:54:55.917116    4178 system_pods.go:89] "kube-proxy-bpsqv" [c264b112-c037-4332-9c47-2561ecb928f1] Running
	I0926 17:54:55.917123    4178 system_pods.go:89] "kube-proxy-ctdh4" [9a21a19a-5cad-4eb9-97f3-4c654fbbe59b] Running
	I0926 17:54:55.917126    4178 system_pods.go:89] "kube-proxy-nrsx7" [14b031d0-b044-4500-8cc1-1397c83c1886] Running
	I0926 17:54:55.917129    4178 system_pods.go:89] "kube-scheduler-ha-476000" [ef0e308c-1b28-413c-846e-24935489434d] Running
	I0926 17:54:55.917132    4178 system_pods.go:89] "kube-scheduler-ha-476000-m02" [2b75a7bc-a19c-4aeb-8521-10bd963cacbb] Running
	I0926 17:54:55.917135    4178 system_pods.go:89] "kube-scheduler-ha-476000-m03" [206cbf3f-bff1-4164-a622-3be7593c08ac] Running
	I0926 17:54:55.917138    4178 system_pods.go:89] "kube-vip-ha-476000" [376a9128-fe3c-41aa-b26c-da921ae20e68] Running
	I0926 17:54:55.917140    4178 system_pods.go:89] "kube-vip-ha-476000-m02" [7e907e74-7f15-4c04-b463-682a14766e66] Running
	I0926 17:54:55.917144    4178 system_pods.go:89] "kube-vip-ha-476000-m03" [bf2e3b8c-cf42-45b0-a4d7-a32b820bab21] Running
	I0926 17:54:55.917146    4178 system_pods.go:89] "storage-provisioner" [e3e367a7-6cda-4177-a81d-7897333308d7] Running
	I0926 17:54:55.917151    4178 system_pods.go:126] duration metric: took 9.116472ms to wait for k8s-apps to be running ...
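
The k8s-apps check above lists the kube-system pods a second time and requires each to be Running before moving on. A client-go sketch of that list-and-verify step:

    // appsrunning_sketch.go — a minimal client-go sketch of the k8s-apps check
    // above: list kube-system pods and report any that are not Running.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			fmt.Printf("not running: %s (%s)\n", p.Name, p.Status.Phase)
    		}
    	}
    }
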
	I0926 17:54:55.917160    4178 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 17:54:55.917225    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 17:54:55.928854    4178 system_svc.go:56] duration metric: took 11.69353ms WaitForService to wait for kubelet
	I0926 17:54:55.928867    4178 kubeadm.go:582] duration metric: took 1m15.795183486s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:54:55.928878    4178 node_conditions.go:102] verifying NodePressure condition ...
	I0926 17:54:55.928918    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0926 17:54:55.928924    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.928930    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.928933    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.932146    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:55.933143    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933159    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933173    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933176    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933181    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933183    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933186    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933190    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933193    4178 node_conditions.go:105] duration metric: took 4.311525ms to run NodePressure ...
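
The NodePressure check above lists /api/v1/nodes and reads each node's ephemeral-storage and cpu capacity, producing the four capacity pairs printed. A client-go sketch of the same read-out:

    // nodecaps_sketch.go — a minimal client-go sketch of the capacity read-out
    // above; field names follow the core/v1 API.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// Capacity is a ResourceList keyed by well-known resource names.
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
    	}
    }
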
	I0926 17:54:55.933202    4178 start.go:241] waiting for startup goroutines ...
	I0926 17:54:55.933219    4178 start.go:255] writing updated cluster config ...
	I0926 17:54:55.954947    4178 out.go:201] 
	I0926 17:54:55.975717    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:54:55.975787    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:54:55.997338    4178 out.go:177] * Starting "ha-476000-m03" control-plane node in "ha-476000" cluster
	I0926 17:54:56.055744    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:54:56.055778    4178 cache.go:56] Caching tarball of preloaded images
	I0926 17:54:56.056007    4178 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 17:54:56.056029    4178 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:54:56.056173    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:54:56.057121    4178 start.go:360] acquireMachinesLock for ha-476000-m03: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:54:56.057290    4178 start.go:364] duration metric: took 139.967µs to acquireMachinesLock for "ha-476000-m03"
	I0926 17:54:56.057321    4178 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:54:56.057331    4178 fix.go:54] fixHost starting: m03
	I0926 17:54:56.057738    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:54:56.057766    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:54:56.066973    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52106
	I0926 17:54:56.067348    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:54:56.067691    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:54:56.067705    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:54:56.067918    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:54:56.068036    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:54:56.068122    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetState
	I0926 17:54:56.068201    4178 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:54:56.068289    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid from json: 3537
	I0926 17:54:56.069219    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid 3537 missing from process table
	I0926 17:54:56.069237    4178 fix.go:112] recreateIfNeeded on ha-476000-m03: state=Stopped err=<nil>
	I0926 17:54:56.069245    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	W0926 17:54:56.069331    4178 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:54:56.090482    4178 out.go:177] * Restarting existing hyperkit VM for "ha-476000-m03" ...
	I0926 17:54:56.132629    4178 main.go:141] libmachine: (ha-476000-m03) Calling .Start
	I0926 17:54:56.132887    4178 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:54:56.132957    4178 main.go:141] libmachine: (ha-476000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid
	I0926 17:54:56.134746    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid 3537 missing from process table
	I0926 17:54:56.134764    4178 main.go:141] libmachine: (ha-476000-m03) DBG | pid 3537 is in state "Stopped"
	I0926 17:54:56.134782    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid...
	I0926 17:54:56.135225    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Using UUID 91a51069-a363-4c64-acd8-a07fa14dbb0d
	I0926 17:54:56.162007    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Generated MAC 66:6f:5a:2d:e2:16
	I0926 17:54:56.162027    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000
	I0926 17:54:56.162143    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"91a51069-a363-4c64-acd8-a07fa14dbb0d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003accc0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:54:56.162181    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"91a51069-a363-4c64-acd8-a07fa14dbb0d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003accc0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:54:56.162253    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "91a51069-a363-4c64-acd8-a07fa14dbb0d", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/ha-476000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"}
	I0926 17:54:56.162300    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 91a51069-a363-4c64-acd8-a07fa14dbb0d -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/ha-476000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"
	I0926 17:54:56.162312    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 17:54:56.163637    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Pid is 4226
	I0926 17:54:56.164043    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Attempt 0
	I0926 17:54:56.164071    4178 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:54:56.164140    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid from json: 4226
	I0926 17:54:56.166126    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Searching for 66:6f:5a:2d:e2:16 in /var/db/dhcpd_leases ...
	I0926 17:54:56.166206    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:54:56.166235    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 17:54:56.166254    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 17:54:56.166288    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 17:54:56.166308    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f7515c}
	I0926 17:54:56.166318    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Found match: 66:6f:5a:2d:e2:16
	I0926 17:54:56.166327    4178 main.go:141] libmachine: (ha-476000-m03) DBG | IP: 192.169.0.7
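
The driver recovers the restarted VM's IP by scanning macOS's /var/db/dhcpd_leases for the MAC hyperkit generated, as the "Searching for 66:6f:5a:2d:e2:16" lines above show. A simplified, line-oriented sketch of that lookup (the real file is a brace-delimited lease list; this skips full parsing):

    // leases_sketch.go — a minimal sketch of matching a MAC in /var/db/dhcpd_leases,
    // as the hyperkit driver does above; parsing is simplified for illustration.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const mac = "66:6f:5a:2d:e2:16" // MAC generated for ha-476000-m03 above
    	f, err := os.Open("/var/db/dhcpd_leases")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if strings.HasPrefix(line, "ip_address=") {
    			ip = strings.TrimPrefix(line, "ip_address=") // remember this entry's IP
    		}
    		// hw_address lines look like "hw_address=1,66:6f:5a:2d:e2:16".
    		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
    			fmt.Println("IP:", ip)
    			return
    		}
    	}
    	fmt.Println("no lease found for", mac)
    }
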
	I0926 17:54:56.166332    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetConfigRaw
	I0926 17:54:56.166976    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetIP
	I0926 17:54:56.167202    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:54:56.167675    4178 machine.go:93] provisionDockerMachine start ...
	I0926 17:54:56.167686    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:54:56.167814    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:54:56.167961    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:54:56.168088    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:54:56.168207    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:54:56.168321    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:54:56.168450    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:54:56.168613    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:54:56.168622    4178 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 17:54:56.172038    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 17:54:56.180188    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 17:54:56.181229    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:54:56.181258    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:54:56.181274    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:54:56.181290    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:54:56.563523    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 17:54:56.563541    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 17:54:56.678338    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:54:56.678355    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:54:56.678363    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:54:56.678373    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:54:56.679203    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 17:54:56.679212    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 17:55:02.300815    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0926 17:55:02.300833    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0926 17:55:02.300855    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0926 17:55:02.325228    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0926 17:55:31.235618    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 17:55:31.235633    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetMachineName
	I0926 17:55:31.235773    4178 buildroot.go:166] provisioning hostname "ha-476000-m03"
	I0926 17:55:31.235783    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetMachineName
	I0926 17:55:31.235886    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.235992    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.236097    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.236189    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.236274    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.236414    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.236550    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.236559    4178 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-476000-m03 && echo "ha-476000-m03" | sudo tee /etc/hostname
	I0926 17:55:31.305642    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-476000-m03
	
	I0926 17:55:31.305657    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.305790    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.305908    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.306006    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.306089    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.306235    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.306383    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.306394    4178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-476000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-476000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-476000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:55:31.369873    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
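
The two SSH commands above make the node's identity stable: the hostname is set and persisted via "sudo hostname ... | sudo tee /etc/hostname", and /etc/hosts is then patched idempotently so the name always resolves locally. The same pattern, lightly cleaned up as a standalone sketch (NODE is a hypothetical variable, not part of minikube):

    NODE=ha-476000-m03
    if ! grep -q "\s${NODE}\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${NODE}/" /etc/hosts   # rewrite the existing alias
      else
        echo "127.0.1.1 ${NODE}" | sudo tee -a /etc/hosts                 # or append a fresh one
      fi
    fi
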
	I0926 17:55:31.369889    4178 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 17:55:31.369903    4178 buildroot.go:174] setting up certificates
	I0926 17:55:31.369909    4178 provision.go:84] configureAuth start
	I0926 17:55:31.369916    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetMachineName
	I0926 17:55:31.370048    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetIP
	I0926 17:55:31.370147    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.370234    4178 provision.go:143] copyHostCerts
	I0926 17:55:31.370268    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:55:31.370317    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 17:55:31.370322    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:55:31.370451    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 17:55:31.370647    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:55:31.370676    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 17:55:31.370680    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:55:31.370748    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 17:55:31.370903    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:55:31.370932    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 17:55:31.370937    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:55:31.371006    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 17:55:31.371150    4178 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.ha-476000-m03 san=[127.0.0.1 192.169.0.7 ha-476000-m03 localhost minikube]
	I0926 17:55:31.544988    4178 provision.go:177] copyRemoteCerts
	I0926 17:55:31.545045    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 17:55:31.545059    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.545196    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.545298    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.545402    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.545491    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:31.580851    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 17:55:31.580928    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 17:55:31.601357    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 17:55:31.601440    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 17:55:31.621840    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 17:55:31.621921    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 17:55:31.641722    4178 provision.go:87] duration metric: took 271.803372ms to configureAuth
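
The configureAuth phase above regenerates the Docker TLS server certificate with SANs covering every name the engine may be dialed by (127.0.0.1, 192.169.0.7, ha-476000-m03, localhost, minikube), then ships ca.pem, server.pem and server-key.pem to /etc/docker. The actual signing happens in Go, but an openssl equivalent of the generation step would look roughly like this (a sketch; file names taken from the log):

    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.ha-476000-m03" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.169.0.7,DNS:ha-476000-m03,DNS:localhost,DNS:minikube') \
      -out server.pem
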
	I0926 17:55:31.641736    4178 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:55:31.641909    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:55:31.641923    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:31.642055    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.642148    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.642236    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.642329    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.642416    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.642531    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.642652    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.642659    4178 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:55:31.699187    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:55:31.699200    4178 buildroot.go:70] root file system type: tmpfs
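
The probe above returning "tmpfs" tells the provisioner it is running on a Buildroot live image whose root filesystem is rebuilt from the ISO on every boot:

    df --output=fstype /    # prints "tmpfs" here, so nothing outside persisted paths survives a reboot

which is why the docker.service unit is rendered and installed in full below rather than assumed to already exist.
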
	I0926 17:55:31.699283    4178 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:55:31.699296    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.699424    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.699525    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.699630    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.699725    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.699863    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.700007    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.700056    4178 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:55:31.769790    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
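
Two details of the unit echoed back above are worth noting. The doubled Environment=NO_PROXY lines are not a bug: per systemd.exec(5), when the same variable is assigned twice the later assignment wins, so the effective value is NO_PROXY=192.169.0.5,192.169.0.6. And the empty ExecStart= is the standard reset idiom the unit's own comments describe: it clears any inherited start command so the dockerd invocation that follows is the only one. Both can be verified on the guest with:

    systemctl show docker -p Environment -p ExecStart
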
	
	I0926 17:55:31.769808    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.769942    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.770041    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.770127    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.770216    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.770341    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.770484    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.770496    4178 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:55:33.400017    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
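
The one-liner above is an update-if-changed guard: diff -u succeeds, and the || branch is skipped, only when the rendered unit is byte-identical to the installed one. On this freshly provisioned tmpfs root /lib/systemd/system/docker.service did not exist yet, so diff failed with "can't stat" and the fallback installed, enabled (hence the "Created symlink" line) and restarted the service. The general shape, as a hypothetical sketch with placeholder names:

    render_unit > /tmp/unit.new
    sudo diff -u /lib/systemd/system/my.service /tmp/unit.new || {
      sudo mv /tmp/unit.new /lib/systemd/system/my.service
      sudo systemctl daemon-reload && sudo systemctl enable my.service && sudo systemctl restart my.service
    }
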
	
	I0926 17:55:33.400032    4178 machine.go:96] duration metric: took 37.232210795s to provisionDockerMachine
	I0926 17:55:33.400040    4178 start.go:293] postStartSetup for "ha-476000-m03" (driver="hyperkit")
	I0926 17:55:33.400054    4178 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:55:33.400067    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.400257    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:55:33.400271    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.400365    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.400451    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.400540    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.400615    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:33.437533    4178 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:55:33.440663    4178 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 17:55:33.440673    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 17:55:33.440763    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 17:55:33.440901    4178 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 17:55:33.440910    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 17:55:33.441066    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 17:55:33.449179    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:55:33.469328    4178 start.go:296] duration metric: took 69.278399ms for postStartSetup
	I0926 17:55:33.469350    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.469543    4178 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0926 17:55:33.469556    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.469645    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.469723    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.469812    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.469885    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:33.505216    4178 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0926 17:55:33.505294    4178 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0926 17:55:33.540120    4178 fix.go:56] duration metric: took 37.482649135s for fixHost
	I0926 17:55:33.540150    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.540287    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.540382    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.540461    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.540540    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.540677    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:33.540816    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:33.540823    4178 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:55:33.598810    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398533.714160628
	
	I0926 17:55:33.598825    4178 fix.go:216] guest clock: 1727398533.714160628
	I0926 17:55:33.598831    4178 fix.go:229] Guest: 2024-09-26 17:55:33.714160628 -0700 PDT Remote: 2024-09-26 17:55:33.540136 -0700 PDT m=+153.107512249 (delta=174.024628ms)
	I0926 17:55:33.598841    4178 fix.go:200] guest clock delta is within tolerance: 174.024628ms
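
The date +%s.%N round-trip above is a clock-skew check: the guest's wall clock is compared against the host's, and the measured delta of 174.024628ms is inside tolerance, so no resync is attempted (large skew can break TLS certificate validation and etcd leases). A hypothetical manual version of the same check, reusing the IP and user from this log:

    guest=$(ssh docker@192.169.0.7 date +%s.%N)   # guest clock
    host=$(date +%s.%N)                           # host clock
    echo "$guest - $host" | bc                    # skew in seconds
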
	I0926 17:55:33.598846    4178 start.go:83] releasing machines lock for "ha-476000-m03", held for 37.541403544s
	I0926 17:55:33.598861    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.598984    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetIP
	I0926 17:55:33.620720    4178 out.go:177] * Found network options:
	I0926 17:55:33.640782    4178 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0926 17:55:33.662722    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	W0926 17:55:33.662755    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:55:33.662789    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.663752    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.664030    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.664220    4178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:55:33.664265    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	W0926 17:55:33.664303    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	W0926 17:55:33.664331    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:55:33.664429    4178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 17:55:33.664449    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.664488    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.664703    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.664719    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.664903    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.664932    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.665066    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:33.665091    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.665207    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	W0926 17:55:33.697895    4178 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:55:33.697966    4178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 17:55:33.748934    4178 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 17:55:33.748959    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:55:33.749065    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:55:33.765581    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 17:55:33.775502    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:55:33.785025    4178 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:55:33.785083    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:55:33.794919    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:55:33.804605    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:55:33.814324    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:55:33.824237    4178 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:55:33.832956    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:55:33.841773    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:55:33.851179    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
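
The sed pipeline above rewrites /etc/containerd/config.toml in place; the net effect on the CRI plugin section is approximately the following (a sketch of the touched keys only, not the complete file):

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10"
      restrict_oom_score_adj = false
      enable_unprivileged_ports = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false    # cgroupfs driver, matching the Docker configuration below
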
	I0926 17:55:33.860818    4178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:55:33.869929    4178 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 17:55:33.870002    4178 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 17:55:33.880612    4178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
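
This is the standard Kubernetes bridge-networking prerequisite. The first sysctl probe exits 255 only because br_netfilter is not loaded yet (the /proc path does not exist), so the module is loaded and IPv4 forwarding is switched on. Done by hand it would be:

    sudo modprobe br_netfilter                            # creates /proc/sys/net/bridge/*
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'   # kube-proxy needs forwarding
    sysctl net.bridge.bridge-nf-call-iptables             # now resolvable, typically already 1
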
	I0926 17:55:33.888804    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:55:33.989453    4178 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 17:55:34.008589    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:55:34.008666    4178 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:55:34.033408    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:55:34.045976    4178 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:55:34.061768    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:55:34.072236    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:55:34.082936    4178 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 17:55:34.101453    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:55:34.111855    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:55:34.126151    4178 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:55:34.129207    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:55:34.136448    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
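
Note that /etc/crictl.yaml is written twice in this run: first pointing at containerd (17:55:33), then, once docker is confirmed as the runtime and containerd and crio are stopped, re-pointed at the cri-dockerd shim. The final file is simply

    runtime-endpoint: unix:///var/run/cri-dockerd.sock

so that crictl commands on the guest (for example, sudo crictl ps) reach Docker through cri-dockerd. The contents of the 190-byte 10-cni.conf drop-in copied above are not echoed in the log.
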
	I0926 17:55:34.149966    4178 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:55:34.247760    4178 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:55:34.364359    4178 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:55:34.364382    4178 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
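
The 130-byte daemon.json payload is not echoed in the log either, but given the "configuring docker to use cgroupfs" message it plausibly amounts to something along these lines (an assumption, not the verbatim file):

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" }
    }
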
	I0926 17:55:34.380269    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:55:34.475811    4178 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:56:35.519197    4178 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.04314195s)
	I0926 17:56:35.519276    4178 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0926 17:56:35.552893    4178 out.go:201] 
	W0926 17:56:35.574257    4178 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 27 00:55:31 ha-476000-m03 systemd[1]: Starting Docker Application Container Engine...
	Sep 27 00:55:31 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:31.500016553Z" level=info msg="Starting up"
	Sep 27 00:55:31 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:31.500635723Z" level=info msg="containerd not running, starting managed containerd"
	Sep 27 00:55:31 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:31.501585462Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=510
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.515859502Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530811327Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530896497Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530963742Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530999016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531160593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531211393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531353040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531394128Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531431029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531461249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531611451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531854923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533401951Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533446517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533570107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533614884Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533785548Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533833312Z" level=info msg="metadata content store policy set" policy=shared
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537372044Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537425387Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537458961Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537519539Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537555242Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537622818Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537842730Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537922428Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537957588Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537987448Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538017362Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538049217Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538078685Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538107984Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538137843Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538167077Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538198997Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538230397Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538266484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538296944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538326105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538358875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538390741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538420029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538458203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538495889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538528790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538561681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538590379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538618723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538647795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538678724Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538713636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538743343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538771404Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538879453Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538923135Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538973990Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539015313Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539070453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539103724Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539133731Z" level=info msg="NRI interface is disabled by configuration."
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539314481Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539398768Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539457208Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539540620Z" level=info msg="containerd successfully booted in 0.024310s"
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.523809928Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.557923590Z" level=info msg="Loading containers: start."
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.687864975Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.754261548Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.488464069Z" level=info msg="Loading containers: done."
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495297411Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495333206Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495348892Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495450205Z" level=info msg="Daemon has completed initialization"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.514076327Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.514159018Z" level=info msg="API listen on [::]:2376"
	Sep 27 00:55:33 ha-476000-m03 systemd[1]: Started Docker Application Container Engine.
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.603579868Z" level=info msg="Processing signal 'terminated'"
	Sep 27 00:55:34 ha-476000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.604826953Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.605154827Z" level=info msg="Daemon shutdown complete"
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.605194895Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.605243671Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 27 00:55:35 ha-476000-m03 systemd[1]: docker.service: Deactivated successfully.
	Sep 27 00:55:35 ha-476000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Sep 27 00:55:35 ha-476000-m03 systemd[1]: Starting Docker Application Container Engine...
	Sep 27 00:55:35 ha-476000-m03 dockerd[1093]: time="2024-09-27T00:55:35.644572631Z" level=info msg="Starting up"
	Sep 27 00:56:35 ha-476000-m03 dockerd[1093]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 27 00:56:35 ha-476000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 27 00:56:35 ha-476000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 27 00:56:35 ha-476000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0926 17:56:35.574334    4178 out.go:270] * 
	W0926 17:56:35.575462    4178 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:56:35.658842    4178 out.go:201] 
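
The root cause of the RUNTIME_ENABLE failure is visible at the tail of the journal above: the first dockerd (pid 503) started its managed containerd and later shut down cleanly, but its replacement (pid 1093) spent the whole 60-second start window trying to dial /run/containerd/containerd.sock and hit "context deadline exceeded", so systemd marked the unit failed. Plausibly the forced "systemctl stop -f containerd" at 17:55:34 left a stale socket behind, which the new dockerd took for a live external containerd and waited on. The error message itself names the next diagnostic steps on the guest:

    systemctl status docker.service
    journalctl -xeu docker.service
    systemctl status containerd                  # is the standalone containerd actually up?
    ls -l /run/containerd/containerd.sock        # does the socket dockerd is waiting on exist?
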
	
	
	==> Docker <==
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.206048904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.206179384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 cri-dockerd[1422]: time="2024-09-27T00:54:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ded079a0572139d8da280864d2cf23e26a7a74761427fdb6aa8247bf1b618b63/resolv.conf as [nameserver 192.169.0.1]"
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.465946902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.465995187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.466006348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.466074171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 cri-dockerd[1422]: time="2024-09-27T00:54:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef132416f65d445e2be52f1f35d402e4103f11df5abe57373ffacf06538460a2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 27 00:54:01 ha-476000 cri-dockerd[1422]: time="2024-09-27T00:54:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82fb727d3b4ab9beb6771fe42b02b13cfa819ec6e94565fc85eb5e44849131dc/resolv.conf as [nameserver 192.169.0.1]"
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.953799067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.953836835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.953845219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.953903701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.967774874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.968202742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.968237276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.968864557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:32 ha-476000 dockerd[1165]: time="2024-09-27T00:54:32.331720830Z" level=info msg="ignoring event" container=182d3576c4be84b985fdcac8be52d4bc1daa03061cca1c9af1ea1719cc87ef93 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:54:32 ha-476000 dockerd[1171]: time="2024-09-27T00:54:32.332359122Z" level=info msg="shim disconnected" id=182d3576c4be84b985fdcac8be52d4bc1daa03061cca1c9af1ea1719cc87ef93 namespace=moby
	Sep 27 00:54:32 ha-476000 dockerd[1171]: time="2024-09-27T00:54:32.332548493Z" level=warning msg="cleaning up after shim disconnected" id=182d3576c4be84b985fdcac8be52d4bc1daa03061cca1c9af1ea1719cc87ef93 namespace=moby
	Sep 27 00:54:32 ha-476000 dockerd[1171]: time="2024-09-27T00:54:32.332589783Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:54:47 ha-476000 dockerd[1171]: time="2024-09-27T00:54:47.288497270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 00:54:47 ha-476000 dockerd[1171]: time="2024-09-27T00:54:47.289077983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:54:47 ha-476000 dockerd[1171]: time="2024-09-27T00:54:47.289196082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:47 ha-476000 dockerd[1171]: time="2024-09-27T00:54:47.289608100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b05b1fc6dccd2       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   82fb727d3b4ab       storage-provisioner
	182d3576c4be8       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   82fb727d3b4ab       storage-provisioner
	1e068209398d4       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   ef132416f65d4       busybox-7dff88458-bvjrf
	3ab08f3aed771       60c005f310ff3                                                                                         2 minutes ago        Running             kube-proxy                1                   ded079a057213       kube-proxy-nrsx7
	13b4ae2edced3       12968670680f4                                                                                         2 minutes ago        Running             kindnet-cni               1                   aedbce80ab870       kindnet-lgj66
	bd209bf19cc97       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   78def8c2a71e9       coredns-7c65d6cfc9-7jwgv
	fa6222acd1314       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   c557d11d235a0       coredns-7c65d6cfc9-44l9n
	87e465b7b95f5       6bab7719df100                                                                                         2 minutes ago        Running             kube-apiserver            2                   84bf5bfc1db95       kube-apiserver-ha-476000
	01c5e9b4fab08       175ffd71cce3d                                                                                         2 minutes ago        Running             kube-controller-manager   2                   7a8e5df4a06d2       kube-controller-manager-ha-476000
	e50b7f6d45d34       38af8ddebf499                                                                                         3 minutes ago        Running             kube-vip                  0                   9ff0bf9fa82a1       kube-vip-ha-476000
	e923cc80604d7       9aa1fad941575                                                                                         3 minutes ago        Running             kube-scheduler            1                   14ddb9d9f440b       kube-scheduler-ha-476000
	89ad0e203b827       2e96e5913fc06                                                                                         3 minutes ago        Running             etcd                      1                   28300cd77661a       etcd-ha-476000
	d6683f4746762       6bab7719df100                                                                                         3 minutes ago        Exited              kube-apiserver            1                   84bf5bfc1db95       kube-apiserver-ha-476000
	06a5f950d0b27       175ffd71cce3d                                                                                         3 minutes ago        Exited              kube-controller-manager   1                   7a8e5df4a06d2       kube-controller-manager-ha-476000
	0fe8d9cd2d8d2       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago       Exited              busybox                   0                   58dc7b4f775bb       busybox-7dff88458-bvjrf
	6e7030dd2319d       c69fa2e9cbf5f                                                                                         13 minutes ago       Exited              coredns                   0                   19d1dd5324d2b       coredns-7c65d6cfc9-7jwgv
	325909e950c7b       c69fa2e9cbf5f                                                                                         13 minutes ago       Exited              coredns                   0                   4de17e21e7a0f       coredns-7c65d6cfc9-44l9n
	730d4ab163e72       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              13 minutes ago       Exited              kindnet-cni               0                   30119aa4fc19b       kindnet-lgj66
	2d1ef1d1af27d       60c005f310ff3                                                                                         13 minutes ago       Exited              kube-proxy                0                   581372b45e58a       kube-proxy-nrsx7
	8b01a83a0b098       9aa1fad941575                                                                                         14 minutes ago       Exited              kube-scheduler            0                   c0232eed71fc3       kube-scheduler-ha-476000
	c08f45a78a8ec       2e96e5913fc06                                                                                         14 minutes ago       Exited              etcd                      0                   ff9ea0993276b       etcd-ha-476000
	
	
	==> coredns [325909e950c7] <==
	[INFO] 10.244.0.4:41413 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172004s
	[INFO] 10.244.0.4:39923 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145289s
	[INFO] 10.244.0.4:55894 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153357s
	[INFO] 10.244.0.4:52696 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059737s
	[INFO] 10.244.1.2:45922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008915s
	[INFO] 10.244.1.2:44828 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111301s
	[INFO] 10.244.1.2:53232 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116513s
	[INFO] 10.244.2.2:38669 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109219s
	[INFO] 10.244.2.2:51776 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069559s
	[INFO] 10.244.2.2:34317 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136009s
	[INFO] 10.244.2.2:35638 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001211s
	[INFO] 10.244.2.2:51345 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075754s
	[INFO] 10.244.0.4:53603 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110008s
	[INFO] 10.244.0.4:48703 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116941s
	[INFO] 10.244.1.2:60563 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101753s
	[INFO] 10.244.1.2:40746 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119902s
	[INFO] 10.244.2.2:38053 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094376s
	[INFO] 10.244.2.2:51713 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069296s
	[INFO] 10.244.0.4:32805 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008605s
	[INFO] 10.244.0.4:44664 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000292333s
	[INFO] 10.244.1.2:33360 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000078243s
	[INFO] 10.244.2.2:36409 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159318s
	[INFO] 10.244.2.2:36868 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094303s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6e7030dd2319] <==
	[INFO] 10.244.0.4:56870 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000085932s
	[INFO] 10.244.0.4:42671 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000180223s
	[INFO] 10.244.1.2:48098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102353s
	[INFO] 10.244.1.2:56626 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00009538s
	[INFO] 10.244.1.2:45195 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135305s
	[INFO] 10.244.1.2:57387 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000073744s
	[INFO] 10.244.1.2:56567 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000045328s
	[INFO] 10.244.2.2:40253 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000077683s
	[INFO] 10.244.2.2:49008 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110327s
	[INFO] 10.244.2.2:54182 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000061031s
	[INFO] 10.244.0.4:53519 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087904s
	[INFO] 10.244.0.4:37380 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132535s
	[INFO] 10.244.1.2:33397 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128623s
	[INFO] 10.244.1.2:35879 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014214s
	[INFO] 10.244.2.2:39230 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133513s
	[INFO] 10.244.2.2:47654 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054424s
	[INFO] 10.244.0.4:59796 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007443s
	[INFO] 10.244.0.4:49766 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000103812s
	[INFO] 10.244.1.2:36226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102458s
	[INFO] 10.244.1.2:35698 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010282s
	[INFO] 10.244.1.2:40757 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000066548s
	[INFO] 10.244.2.2:44488 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148719s
	[INFO] 10.244.2.2:40024 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000069743s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bd209bf19cc9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:43213 - 10525 "HINFO IN 4125844120146388069.4027558012888257277. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0104908s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1432599962]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.650) (total time: 30002ms):
	Trace[1432599962]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (00:54:31.653)
	Trace[1432599962]: [30.002427557s] [30.002427557s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[417897734]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.652) (total time: 30002ms):
	Trace[417897734]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (00:54:31.654)
	Trace[417897734]: [30.002368442s] [30.002368442s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1861937109]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.653) (total time: 30001ms):
	Trace[1861937109]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (00:54:31.654)
	Trace[1861937109]: [30.001494446s] [30.001494446s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [fa6222acd131] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35702 - 33029 "HINFO IN 8241224091513256990.6666502665085127686. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009680676s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1899858293]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.665) (total time: 30001ms):
	Trace[1899858293]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (00:54:31.666)
	Trace[1899858293]: [30.001480741s] [30.001480741s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1985679635]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.669) (total time: 30000ms):
	Trace[1985679635]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (00:54:31.669)
	Trace[1985679635]: [30.000934597s] [30.000934597s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[345146888]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.669) (total time: 30003ms):
	Trace[345146888]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (00:54:31.673)
	Trace[345146888]: [30.003771613s] [30.003771613s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
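
Both restarted coredns instances above (bd209bf19cc9 and fa6222acd131) fail the same way: every list/watch against the in-cluster apiserver VIP https://10.96.0.1:443 hangs for ~30s between 00:54:01 and 00:54:31 before erroring with a dial i/o timeout, which matches the window in which the kube-apiserver container was down (d6683f4746762 is shown Exited in the container list above). A minimal standalone probe that reproduces just the failing dial, offered as an illustrative sketch: the address is taken from the log, while the 5-second timeout is an arbitrary assumption, not the client-go value.

	// probe.go: hedged sketch of the TCP dial that coredns could not complete.
	// Run from inside the cluster network; 10.96.0.1:443 is the Service VIP from the log.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The 5s timeout is an assumption chosen for a quick check.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("unreachable:", err) // coredns saw "i/o timeout" at this point
			return
		}
		defer conn.Close()
		fmt.Println("reachable")
	}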
	
	
	==> describe nodes <==
	Name:               ha-476000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-476000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-476000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_26T17_42_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:42:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-476000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:56:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:53:57 +0000   Fri, 27 Sep 2024 00:42:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:53:57 +0000   Fri, 27 Sep 2024 00:42:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:53:57 +0000   Fri, 27 Sep 2024 00:42:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:53:57 +0000   Fri, 27 Sep 2024 00:42:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-476000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 24c18e25f36040298bb96a7a31469c55
	  System UUID:                99cf4d4f-0000-0000-a72a-447af4e3b1db
	  Boot ID:                    8cf1f24c-8c01-4381-8f8f-6e75f77e6648
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bvjrf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-44l9n             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-7jwgv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-476000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-lgj66                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-476000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-476000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-nrsx7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-476000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-476000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m34s                  kube-proxy       
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-476000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-476000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-476000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node ha-476000 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node ha-476000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node ha-476000 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           14m                    node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  NodeReady                13m                    kubelet          Node ha-476000 status is now: NodeReady
	  Normal  RegisteredNode           12m                    node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  RegisteredNode           9m30s                  node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  Starting                 3m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m18s (x8 over 3m18s)  kubelet          Node ha-476000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m18s (x8 over 3m18s)  kubelet          Node ha-476000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m18s (x7 over 3m18s)  kubelet          Node ha-476000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m45s                  node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  RegisteredNode           2m31s                  node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	
	
	Name:               ha-476000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-476000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-476000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_26T17_43_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:43:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-476000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:56:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:54:04 +0000   Fri, 27 Sep 2024 00:43:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:54:04 +0000   Fri, 27 Sep 2024 00:43:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:54:04 +0000   Fri, 27 Sep 2024 00:43:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:54:04 +0000   Fri, 27 Sep 2024 00:54:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-476000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 35bc971223ac4e939cad535ac89bc725
	  System UUID:                58f4445b-0000-0000-bae0-ab27a7b8106e
	  Boot ID:                    7dcb1bbe-ca7a-45f1-9dd9-dc673285b7e4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gvp8q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-476000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-hhrtc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-476000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-476000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-ctdh4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-476000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-476000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m17s                  kube-proxy       
	  Normal   Starting                 9m34s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-476000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-476000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-476000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   NodeAllocatableEnforced  9m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 9m38s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9m38s                  kubelet          Node ha-476000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m38s                  kubelet          Node ha-476000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m38s                  kubelet          Node ha-476000-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9m38s                  kubelet          Node ha-476000-m02 has been rebooted, boot id: 993826c6-3fde-4076-a7cb-33cc19f1b1bc
	  Normal   RegisteredNode           9m30s                  node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   NodeHasNoDiskPressure    2m57s (x8 over 2m57s)  kubelet          Node ha-476000-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m57s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m57s (x8 over 2m57s)  kubelet          Node ha-476000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m57s (x7 over 2m57s)  kubelet          Node ha-476000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m45s                  node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   RegisteredNode           2m31s                  node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	
	
	Name:               ha-476000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-476000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-476000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_26T17_44_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:44:51 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-476000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:47:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 27 Sep 2024 00:45:53 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 27 Sep 2024 00:45:53 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 27 Sep 2024 00:45:53 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 27 Sep 2024 00:45:53 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-476000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 365f6a31a3d140dba5c1be3b08da7ad2
	  System UUID:                91a54c64-0000-0000-acd8-a07fa14dbb0d
	  Boot ID:                    4ca02f6d-4375-4909-8877-3e005809b499
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jgndj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-476000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-4pnxr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-476000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-476000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-bpsqv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-476000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-476000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-476000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-476000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-476000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           9m30s              node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           2m45s              node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           2m31s              node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  NodeNotReady             2m5s               node-controller  Node ha-476000-m03 status is now: NodeNotReady
	
	
	Name:               ha-476000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-476000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-476000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_26T17_45_52_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:45:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-476000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:47:13 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 27 Sep 2024 00:46:22 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 27 Sep 2024 00:46:22 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 27 Sep 2024 00:46:22 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 27 Sep 2024 00:46:22 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-476000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 7bdc03e4e33a47a0a7d85ecb664669d4
	  System UUID:                dcce4501-0000-0000-a378-25a085ede049
	  Boot ID:                    b0d71ae5-8550-430a-94b7-e146e65fc279
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-44vxl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-5d8nb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-476000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-476000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-476000-m04 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           10m                node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-476000-m04 status is now: NodeReady
	  Normal  RegisteredNode           9m30s              node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  RegisteredNode           2m45s              node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  RegisteredNode           2m31s              node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  NodeNotReady             2m5s               node-controller  Node ha-476000-m04 status is now: NodeNotReady
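
A reading note on the four describe blocks above, since the numbers are easy to misread. The percentages in each "Allocated resources" table are requests/limits over that node's allocatable capacity, apparently truncated to whole percents: on ha-476000, CPU requests of 950m against 2 CPUs (2000m) give 950/2000 = 47.5%, shown as 47%, and memory requests of 290Mi against 2164336Ki allocatable give 296960/2164336 ≈ 13.7%, shown as 13%. Separately, ha-476000-m03 and ha-476000-m04 both carry node.kubernetes.io/unreachable taints, and all four of their conditions flipped to Unknown at 00:54:32 ("Kubelet stopped posting node status."), which lines up with the control-plane restart visible in the primary node's events.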
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.036532] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.006931] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.697129] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006778] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.775372] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.244387] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.695216] systemd-fstab-generator[461]: Ignoring "noauto" option for root device
	[  +0.101404] systemd-fstab-generator[473]: Ignoring "noauto" option for root device
	[  +1.958371] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	[  +0.251045] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +0.050021] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.047173] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +0.112931] systemd-fstab-generator[1157]: Ignoring "noauto" option for root device
	[  +2.468376] systemd-fstab-generator[1375]: Ignoring "noauto" option for root device
	[  +0.117710] systemd-fstab-generator[1387]: Ignoring "noauto" option for root device
	[  +0.113441] systemd-fstab-generator[1399]: Ignoring "noauto" option for root device
	[  +0.129593] systemd-fstab-generator[1414]: Ignoring "noauto" option for root device
	[  +0.427728] systemd-fstab-generator[1574]: Ignoring "noauto" option for root device
	[  +6.920294] kauditd_printk_skb: 212 callbacks suppressed
	[ +21.597968] kauditd_printk_skb: 40 callbacks suppressed
	[Sep27 00:54] kauditd_printk_skb: 94 callbacks suppressed
	
	
	==> etcd [89ad0e203b82] <==
	{"level":"warn","ts":"2024-09-27T00:55:36.539327Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:55:41.539711Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:55:41.539753Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:55:46.540673Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:55:46.541012Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:55:51.540995Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:55:51.541410Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:55:56.541854Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:55:56.541895Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:01.543083Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:01.543179Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:06.543927Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:06.543948Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:11.545083Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:11.545205Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:16.546548Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:16.546812Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:21.547452Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:21.547479Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:26.548475Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:26.548565Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:31.549392Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:31.549456Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:36.549771Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:36.549785Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	
	
	==> etcd [c08f45a78a8e] <==
	{"level":"warn","ts":"2024-09-27T00:47:41.542035Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T00:47:33.744957Z","time spent":"7.797074842s","remote":"127.0.0.1:40790","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":0,"response size":0,"request content":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true "}
	2024/09/27 00:47:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-27T00:47:41.542079Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.225057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-27T00:47:41.542107Z","caller":"traceutil/trace.go:171","msg":"trace[2123825160] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"299.252922ms","start":"2024-09-27T00:47:41.242851Z","end":"2024-09-27T00:47:41.542104Z","steps":["trace[2123825160] 'agreement among raft nodes before linearized reading'  (duration: 299.224906ms)"],"step_count":1}
	2024/09/27 00:47:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-27T00:47:41.593990Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T00:47:41.594018Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-27T00:47:41.602616Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-27T00:47:41.604582Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604604Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604619Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604716Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604762Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604790Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604798Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604802Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.604809Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.604819Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.605484Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.605507Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.605546Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.605556Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.607550Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-27T00:47:41.607595Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-27T00:47:41.607615Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-476000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 00:56:38 up 3 min,  0 users,  load average: 0.41, 0.37, 0.16
	Linux ha-476000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [13b4ae2edced] <==
	I0927 00:56:02.486610       1 main.go:299] handling current node
	I0927 00:56:12.491929       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 00:56:12.492028       1 main.go:299] handling current node
	I0927 00:56:12.492057       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 00:56:12.492172       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 00:56:12.492345       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 00:56:12.492420       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 00:56:12.492592       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 00:56:12.492701       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:56:22.489348       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 00:56:22.489709       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 00:56:22.490058       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 00:56:22.490139       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:56:22.490264       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 00:56:22.490346       1 main.go:299] handling current node
	I0927 00:56:22.490376       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 00:56:22.490394       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 00:56:32.491793       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 00:56:32.491867       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 00:56:32.491992       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 00:56:32.492035       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:56:32.492099       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 00:56:32.492142       1 main.go:299] handling current node
	I0927 00:56:32.492170       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 00:56:32.492208       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [730d4ab163e7] <==
	I0927 00:47:03.705461       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:47:13.713791       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 00:47:13.713985       1 main.go:299] handling current node
	I0927 00:47:13.714102       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 00:47:13.714214       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 00:47:13.714414       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 00:47:13.714545       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 00:47:13.714946       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 00:47:13.715065       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:47:23.710748       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 00:47:23.710778       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 00:47:23.710966       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 00:47:23.711202       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 00:47:23.711295       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 00:47:23.711303       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:47:23.711508       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 00:47:23.711595       1 main.go:299] handling current node
	I0927 00:47:33.704824       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 00:47:33.704897       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 00:47:33.705242       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 00:47:33.705307       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 00:47:33.705486       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 00:47:33.705818       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:47:33.705995       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 00:47:33.706008       1 main.go:299] handling current node
	
	
	==> kube-apiserver [87e465b7b95f] <==
	I0927 00:54:02.884947       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0927 00:54:02.884955       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0927 00:54:02.943365       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 00:54:02.943570       1 policy_source.go:224] refreshing policies
	I0927 00:54:02.949648       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 00:54:02.975777       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0927 00:54:02.975897       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0927 00:54:02.975835       1 shared_informer.go:320] Caches are synced for configmaps
	I0927 00:54:02.976591       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0927 00:54:02.977323       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0927 00:54:02.977419       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0927 00:54:02.977565       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0927 00:54:02.982008       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0927 00:54:02.982182       1 shared_informer.go:320] Caches are synced for node_authorizer
	W0927 00:54:02.987432       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	I0927 00:54:02.987619       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0927 00:54:02.987707       1 aggregator.go:171] initial CRD sync complete...
	I0927 00:54:02.987750       1 autoregister_controller.go:144] Starting autoregister controller
	I0927 00:54:02.987857       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0927 00:54:02.987898       1 cache.go:39] Caches are synced for autoregister controller
	I0927 00:54:02.988709       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 00:54:02.993982       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0927 00:54:02.997126       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0927 00:54:03.884450       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0927 00:54:04.211694       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.6]
	
	
	==> kube-apiserver [d6683f474676] <==
	I0927 00:53:26.693239       1 options.go:228] external host was not specified, using 192.169.0.5
	I0927 00:53:26.695952       1 server.go:142] Version: v1.31.1
	I0927 00:53:26.696173       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:53:27.299904       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0927 00:53:27.320033       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 00:53:27.330041       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0927 00:53:27.330098       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0927 00:53:27.332141       1 instance.go:232] Using reconciler: lease
	W0927 00:53:47.293920       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0927 00:53:47.294149       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0927 00:53:47.333433       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [01c5e9b4fab0] <==
	I0927 00:54:06.445126       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0927 00:54:06.447687       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 00:54:06.473417       1 shared_informer.go:320] Caches are synced for daemon sets
	I0927 00:54:06.496437       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 00:54:06.921734       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 00:54:06.972377       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 00:54:06.972441       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0927 00:54:07.185942       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.202µs"
	I0927 00:54:09.276645       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.828631ms"
	I0927 00:54:09.276726       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.067µs"
	I0927 00:54:32.998333       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m04"
	I0927 00:54:32.998470       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m03"
	I0927 00:54:33.020582       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m03"
	I0927 00:54:33.020882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m04"
	I0927 00:54:33.070337       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.029804ms"
	I0927 00:54:33.070565       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="96.493µs"
	I0927 00:54:36.474604       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m03"
	I0927 00:54:38.190557       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m03"
	I0927 00:54:40.584626       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-h7qwt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-h7qwt\": the object has been modified; please apply your changes to the latest version and try again"
	I0927 00:54:40.585022       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"3537638a-d8ae-4b35-b930-21aeb412efa9", APIVersion:"v1", ResourceVersion:"270", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-h7qwt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-h7qwt": the object has been modified; please apply your changes to the latest version and try again
	I0927 00:54:40.589666       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="48.410037ms"
	I0927 00:54:40.614904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="25.040724ms"
	I0927 00:54:40.615187       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="45.324µs"
	I0927 00:54:46.573579       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m04"
	I0927 00:54:48.277366       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m04"
	
	
	==> kube-controller-manager [06a5f950d0b2] <==
	I0927 00:53:27.325939       1 serving.go:386] Generated self-signed cert in-memory
	I0927 00:53:28.243164       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0927 00:53:28.243279       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:53:28.245422       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0927 00:53:28.245777       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0927 00:53:28.245999       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0927 00:53:28.246030       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0927 00:53:48.339070       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-proxy [2d1ef1d1af27] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:42:39.294950       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 00:42:39.305827       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0927 00:42:39.314387       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:42:39.360026       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:42:39.360068       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:42:39.360085       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:42:39.362140       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:42:39.362382       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:42:39.362411       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:42:39.365397       1 config.go:199] "Starting service config controller"
	I0927 00:42:39.365470       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:42:39.365636       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:42:39.365692       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:42:39.366725       1 config.go:328] "Starting node config controller"
	I0927 00:42:39.366799       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:42:39.466084       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:42:39.466107       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:42:39.468057       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [3ab08f3aed77] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:54:02.572463       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 00:54:02.595215       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0927 00:54:02.595477       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:54:02.710300       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:54:02.710322       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:54:02.710339       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:54:02.714167       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:54:02.715628       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:54:02.715707       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:54:02.718471       1 config.go:199] "Starting service config controller"
	I0927 00:54:02.719333       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:54:02.719741       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:54:02.719810       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:54:02.721272       1 config.go:328] "Starting node config controller"
	I0927 00:54:02.721390       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:54:02.820358       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:54:02.820547       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:54:02.824323       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8b01a83a0b09] <==
	E0927 00:45:52.380874       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mm66p\": pod kube-proxy-mm66p is already assigned to node \"ha-476000-m04\"" pod="kube-system/kube-proxy-mm66p"
	E0927 00:45:52.381463       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-44vxl\": pod kindnet-44vxl is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-44vxl" node="ha-476000-m04"
	E0927 00:45:52.381533       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 488a3806-d7c1-4397-bff8-00d9ea3cb984(kube-system/kindnet-44vxl) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-44vxl"
	E0927 00:45:52.381617       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-44vxl\": pod kindnet-44vxl is already assigned to node \"ha-476000-m04\"" pod="kube-system/kindnet-44vxl"
	I0927 00:45:52.381654       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-44vxl" node="ha-476000-m04"
	E0927 00:45:52.382881       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gtnxm\": pod kindnet-gtnxm is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-gtnxm" node="ha-476000-m04"
	E0927 00:45:52.383371       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c96b1801-d5cd-47bc-8555-43224fd5668c(kube-system/kindnet-gtnxm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-gtnxm"
	E0927 00:45:52.383419       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gtnxm\": pod kindnet-gtnxm is already assigned to node \"ha-476000-m04\"" pod="kube-system/kindnet-gtnxm"
	I0927 00:45:52.383438       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gtnxm" node="ha-476000-m04"
	E0927 00:45:52.385915       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5d8nb\": pod kube-proxy-5d8nb is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5d8nb" node="ha-476000-m04"
	E0927 00:45:52.386403       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1e9c0178-2d58-4ca1-9fbe-c8e54d91bf1a(kube-system/kube-proxy-5d8nb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5d8nb"
	E0927 00:45:52.388489       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5d8nb\": pod kube-proxy-5d8nb is already assigned to node \"ha-476000-m04\"" pod="kube-system/kube-proxy-5d8nb"
	I0927 00:45:52.388818       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5d8nb" node="ha-476000-m04"
	E0927 00:45:52.414440       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-p2r4t\": pod kindnet-p2r4t is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-p2r4t" node="ha-476000-m04"
	E0927 00:45:52.414491       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e7daae81-cf6d-498e-9458-8613a0c1f174(kube-system/kindnet-p2r4t) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-p2r4t"
	E0927 00:45:52.414504       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-p2r4t\": pod kindnet-p2r4t is already assigned to node \"ha-476000-m04\"" pod="kube-system/kindnet-p2r4t"
	I0927 00:45:52.414830       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-p2r4t" node="ha-476000-m04"
	E0927 00:45:52.434469       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-f2tbl\": pod kube-proxy-f2tbl is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-f2tbl" node="ha-476000-m04"
	E0927 00:45:52.434547       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ce1fa3d7-adbb-4d4d-bd23-a1e60ee54d5b(kube-system/kube-proxy-f2tbl) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-f2tbl"
	E0927 00:45:52.434998       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-f2tbl\": pod kube-proxy-f2tbl is already assigned to node \"ha-476000-m04\"" pod="kube-system/kube-proxy-f2tbl"
	I0927 00:45:52.435043       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-f2tbl" node="ha-476000-m04"
	I0927 00:47:41.631073       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0927 00:47:41.633242       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0927 00:47:41.634639       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0927 00:47:41.635978       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e923cc80604d] <==
	W0927 00:53:55.890712       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:55.890825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:55.916618       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:55.916669       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:56.112443       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:56.112541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:56.325586       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:56.325680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:56.333523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:56.333592       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:57.242866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:57.243040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:57.398430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:57.398522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:57.562966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:57.563196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:58.300576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:58.300855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:58.356734       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:58.356802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:54:02.892809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 00:54:02.892856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:54:02.893077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:54:02.893208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0927 00:54:02.956308       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 00:54:01 ha-476000 kubelet[1581]: I0927 00:54:01.236450    1581 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="617d5efb7a14c0369e33fba284407db0" path="/var/lib/kubelet/pods/617d5efb7a14c0369e33fba284407db0/volumes"
	Sep 27 00:54:01 ha-476000 kubelet[1581]: I0927 00:54:01.850956    1581 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef132416f65d445e2be52f1f35d402e4103f11df5abe57373ffacf06538460a2"
	Sep 27 00:54:01 ha-476000 kubelet[1581]: I0927 00:54:01.898449    1581 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82fb727d3b4ab9beb6771fe42b02b13cfa819ec6e94565fc85eb5e44849131dc"
	Sep 27 00:54:01 ha-476000 kubelet[1581]: I0927 00:54:01.919692    1581 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c557d11d235a0ab874d2738bef5a997f95275377aa0e92ea879bcb3ddbec2481"
	Sep 27 00:54:02 ha-476000 kubelet[1581]: I0927 00:54:02.046801    1581 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ded079a0572139d8da280864d2cf23e26a7a74761427fdb6aa8247bf1b618b63"
	Sep 27 00:54:19 ha-476000 kubelet[1581]: I0927 00:54:19.211634    1581 scope.go:117] "RemoveContainer" containerID="3e1d19d36ca870b70f194e613fddfe9196146ec03c8bbb41afad1f4d75ce6405"
	Sep 27 00:54:19 ha-476000 kubelet[1581]: E0927 00:54:19.255670    1581 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 00:54:19 ha-476000 kubelet[1581]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:54:19 ha-476000 kubelet[1581]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:54:19 ha-476000 kubelet[1581]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:54:19 ha-476000 kubelet[1581]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 00:54:32 ha-476000 kubelet[1581]: I0927 00:54:32.420831    1581 scope.go:117] "RemoveContainer" containerID="4e07ad9ca26cc4761a54659f0b247156a2737aea8eb7e117dc886da3b1912592"
	Sep 27 00:54:32 ha-476000 kubelet[1581]: I0927 00:54:32.421022    1581 scope.go:117] "RemoveContainer" containerID="182d3576c4be84b985fdcac8be52d4bc1daa03061cca1c9af1ea1719cc87ef93"
	Sep 27 00:54:32 ha-476000 kubelet[1581]: E0927 00:54:32.421101    1581 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e3e367a7-6cda-4177-a81d-7897333308d7)\"" pod="kube-system/storage-provisioner" podUID="e3e367a7-6cda-4177-a81d-7897333308d7"
	Sep 27 00:54:47 ha-476000 kubelet[1581]: I0927 00:54:47.232370    1581 scope.go:117] "RemoveContainer" containerID="182d3576c4be84b985fdcac8be52d4bc1daa03061cca1c9af1ea1719cc87ef93"
	Sep 27 00:55:19 ha-476000 kubelet[1581]: E0927 00:55:19.247407    1581 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 00:55:19 ha-476000 kubelet[1581]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:55:19 ha-476000 kubelet[1581]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:55:19 ha-476000 kubelet[1581]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:55:19 ha-476000 kubelet[1581]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 00:56:19 ha-476000 kubelet[1581]: E0927 00:56:19.247959    1581 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 00:56:19 ha-476000 kubelet[1581]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:56:19 ha-476000 kubelet[1581]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:56:19 ha-476000 kubelet[1581]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:56:19 ha-476000 kubelet[1581]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
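
Every retry loop in the dump above reduces to one condition: whether https://192.169.0.5:8443/healthz answers. The controller-manager gives up on exactly that check (controllermanager.go:242), and the scheduler's reflectors log "connection refused" against the same endpoint until the replacement apiserver comes up. A minimal Go sketch of such a readiness poll follows; the URL comes from the logs, while the poll interval, total timeout, and the InsecureSkipVerify shortcut are assumptions made for illustration, not minikube's own implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver health endpoint until it returns 200 OK
// or the overall deadline expires, mirroring the retry loops in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Shortcut for the sketch only: a real client should trust the
		// cluster CA instead of skipping certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("not reachable yet: %v\n", err) // e.g. connect: connection refused
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy within %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.169.0.5:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}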
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-476000 -n ha-476000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-476000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (219.28s)
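
The failure itself traces back one layer deeper: the replacement apiserver (d6683f474676) spent twenty seconds failing TLS handshakes against etcd on 127.0.0.1:2379 and then exited with "Error creating leases: error creating storage factory: context deadline exceeded". A hedged probe of that same connection with the etcd v3 client is sketched below; the dump only confirms the endpoint and that minikube keeps certificates under /var/lib/minikube/certs, so the etcd-specific certificate paths here are assumptions to adjust.

package main

import (
	"context"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// etcd in this cluster requires client certificates; paths are assumed.
	cert, err := tls.LoadX509KeyPair(
		"/var/lib/minikube/certs/etcd/server.crt", // assumed path
		"/var/lib/minikube/certs/etcd/server.key", // assumed path
	)
	if err != nil {
		fmt.Println("loading client cert:", err)
		return
	}
	caPEM, err := os.ReadFile("/var/lib/minikube/certs/etcd/ca.crt") // assumed path
	if err != nil {
		fmt.Println("loading CA:", err)
		return
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
		TLS:         &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	status, err := cli.Status(ctx, "127.0.0.1:2379")
	if err != nil {
		fmt.Println("status failed:", err) // the apiserver saw "context deadline exceeded" here
		return
	}
	fmt.Printf("etcd healthy: leader=%x version=%s\n", status.Leader, status.Version)
}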

TestMultiControlPlane/serial/DegradedAfterClusterRestart (4.34s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:413: expected profile "ha-476000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-476000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-476000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-476000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-476000 -n ha-476000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-476000 logs -n 25: (3.338020503s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-476000 cp ha-476000-m03:/home/docker/cp-test.txt                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04:/home/docker/cp-test_ha-476000-m03_ha-476000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n ha-476000-m04 sudo cat                                                                                      | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /home/docker/cp-test_ha-476000-m03_ha-476000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-476000 cp testdata/cp-test.txt                                                                                            | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3898402723/001/cp-test_ha-476000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000:/home/docker/cp-test_ha-476000-m04_ha-476000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n ha-476000 sudo cat                                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /home/docker/cp-test_ha-476000-m04_ha-476000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m02:/home/docker/cp-test_ha-476000-m04_ha-476000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n ha-476000-m02 sudo cat                                                                                      | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /home/docker/cp-test_ha-476000-m04_ha-476000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m03:/home/docker/cp-test_ha-476000-m04_ha-476000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n ha-476000-m03 sudo cat                                                                                      | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /home/docker/cp-test_ha-476000-m04_ha-476000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-476000 node stop m02 -v=7                                                                                                 | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-476000 node start m02 -v=7                                                                                                | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:47 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-476000 -v=7                                                                                                       | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:47 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-476000 -v=7                                                                                                            | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:47 PDT | 26 Sep 24 17:47 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-476000 --wait=true -v=7                                                                                                | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:47 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-476000                                                                                                            | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:49 PDT |                     |
	| node    | ha-476000 node delete m03 -v=7                                                                                               | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:49 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-476000 stop -v=7                                                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:49 PDT | 26 Sep 24 17:53 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-476000 --wait=true                                                                                                     | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:53 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/26 17:53:00
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 17:53:00.467998    4178 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:53:00.468247    4178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:53:00.468252    4178 out.go:358] Setting ErrFile to fd 2...
	I0926 17:53:00.468256    4178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:53:00.468436    4178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 17:53:00.469901    4178 out.go:352] Setting JSON to false
	I0926 17:53:00.492370    4178 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3150,"bootTime":1727395230,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0926 17:53:00.492530    4178 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:53:00.514400    4178 out.go:177] * [ha-476000] minikube v1.34.0 on Darwin 14.6.1
	I0926 17:53:00.557228    4178 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:53:00.557300    4178 notify.go:220] Checking for updates...
	I0926 17:53:00.599719    4178 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:00.621009    4178 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0926 17:53:00.642091    4178 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:53:00.662936    4178 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 17:53:00.684204    4178 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:53:00.705550    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:00.706120    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.706169    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:00.715431    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52037
	I0926 17:53:00.715807    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:00.716207    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:00.716243    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:00.716493    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:00.716626    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:00.716833    4178 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:53:00.717101    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.717132    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:00.725380    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52039
	I0926 17:53:00.725706    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:00.726059    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:00.726076    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:00.726325    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:00.726449    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
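Each "Launching plugin server" / "Plugin server listening at 127.0.0.1:PORT" pair above is minikube spawning the docker-machine-driver-hyperkit binary as a child process and talking to it over a loopback RPC connection; the "Calling .GetVersion", ".SetConfigRaw", ".GetMachineName" lines are individual round-trips on that connection. A minimal sketch of the same client/server shape using Go's net/rpc (the Driver type and its single method are stand-ins, not the real plugin protocol):

package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

// Driver stands in for the hyperkit plugin's RPC surface; the real
// docker-machine plugin protocol is much richer than this one method.
type Driver struct{}

// GetVersion mirrors the ".GetVersion" round-trips in the log; the
// "Using API Version  1" lines are the reply to this kind of call.
func (d *Driver) GetVersion(_ string, reply *int) error {
	*reply = 1
	return nil
}

func main() {
	srv := rpc.NewServer()
	if err := srv.Register(&Driver{}); err != nil {
		log.Fatal(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0") // ephemeral port, like 52037/52039 above
	if err != nil {
		log.Fatal(err)
	}
	go srv.Accept(ln)

	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	var version int
	if err := client.Call("Driver.GetVersion", "", &version); err != nil {
		log.Fatal(err)
	}
	fmt.Println("plugin API version:", version)
}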
	I0926 17:53:00.754773    4178 out.go:177] * Using the hyperkit driver based on existing profile
	I0926 17:53:00.797071    4178 start.go:297] selected driver: hyperkit
	I0926 17:53:00.797101    4178 start.go:901] validating driver "hyperkit" against &{Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:53:00.797347    4178 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:53:00.797543    4178 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:53:00.797758    4178 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19711-1128/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0926 17:53:00.807380    4178 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0926 17:53:00.811121    4178 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.811145    4178 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0926 17:53:00.813743    4178 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:53:00.813780    4178 cni.go:84] Creating CNI manager for ""
	I0926 17:53:00.813817    4178 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0926 17:53:00.813892    4178 start.go:340] cluster config:
	{Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:53:00.814010    4178 iso.go:125] acquiring lock: {Name:mka8a9c5a237c1e4ae233281d2ff7965d13a843d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:53:00.856015    4178 out.go:177] * Starting "ha-476000" primary control-plane node in "ha-476000" cluster
	I0926 17:53:00.877127    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:53:00.877240    4178 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0926 17:53:00.877263    4178 cache.go:56] Caching tarball of preloaded images
	I0926 17:53:00.877457    4178 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 17:53:00.877476    4178 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:53:00.877658    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:00.878610    4178 start.go:360] acquireMachinesLock for ha-476000: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:53:00.878759    4178 start.go:364] duration metric: took 97.008µs to acquireMachinesLock for "ha-476000"
	I0926 17:53:00.878828    4178 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:53:00.878843    4178 fix.go:54] fixHost starting: 
	I0926 17:53:00.879324    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.879362    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:00.888435    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52041
	I0926 17:53:00.888799    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:00.889164    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:00.889177    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:00.889396    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:00.889518    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:00.889616    4178 main.go:141] libmachine: (ha-476000) Calling .GetState
	I0926 17:53:00.889695    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:00.889775    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4068
	I0926 17:53:00.890689    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid 4068 missing from process table
	I0926 17:53:00.890720    4178 fix.go:112] recreateIfNeeded on ha-476000: state=Stopped err=<nil>
	I0926 17:53:00.890735    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	W0926 17:53:00.890819    4178 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:53:00.933253    4178 out.go:177] * Restarting existing hyperkit VM for "ha-476000" ...
	I0926 17:53:00.956221    4178 main.go:141] libmachine: (ha-476000) Calling .Start
	I0926 17:53:00.956482    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:00.956522    4178 main.go:141] libmachine: (ha-476000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid
	I0926 17:53:00.958313    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid 4068 missing from process table
	I0926 17:53:00.958323    4178 main.go:141] libmachine: (ha-476000) DBG | pid 4068 is in state "Stopped"
	I0926 17:53:00.958337    4178 main.go:141] libmachine: (ha-476000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid...
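The three DBG lines above show the stale-pid recovery path: the pid file survived an unclean shutdown, but pid 4068 is no longer in the process table, so the driver removes the file and starts a fresh hyperkit process. A sketch of the liveness probe such a check typically relies on (signal 0 on unix; the function name is mine, not minikube's):

package main

import (
	"fmt"
	"os"
	"syscall"
)

// pidAlive probes whether the pid recorded in a pid file still refers to a
// live process, without affecting it: signal 0 performs only the existence
// and permission checks. Unix-only; on macOS/Linux os.FindProcess never fails.
func pidAlive(pid int) bool {
	p, err := os.FindProcess(pid)
	if err != nil {
		return false
	}
	return p.Signal(syscall.Signal(0)) == nil
}

func main() {
	fmt.Println(pidAlive(4068)) // the pid the driver found missing above
}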
	I0926 17:53:00.958705    4178 main.go:141] libmachine: (ha-476000) DBG | Using UUID 99cfbb80-3e9d-4d4f-a72a-447af4e3b1db
	I0926 17:53:01.067490    4178 main.go:141] libmachine: (ha-476000) DBG | Generated MAC 96:a2:4a:f3:be:4a
	I0926 17:53:01.067521    4178 main.go:141] libmachine: (ha-476000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000
	I0926 17:53:01.067590    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"99cfbb80-3e9d-4d4f-a72a-447af4e3b1db", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:01.067614    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"99cfbb80-3e9d-4d4f-a72a-447af4e3b1db", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:01.067680    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "99cfbb80-3e9d-4d4f-a72a-447af4e3b1db", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/ha-476000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"}
	I0926 17:53:01.067717    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 99cfbb80-3e9d-4d4f-a72a-447af4e3b1db -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/ha-476000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"
	I0926 17:53:01.067731    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 17:53:01.069340    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Pid is 4191
	I0926 17:53:01.069679    4178 main.go:141] libmachine: (ha-476000) DBG | Attempt 0
	I0926 17:53:01.069693    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:01.069753    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4191
	I0926 17:53:01.071639    4178 main.go:141] libmachine: (ha-476000) DBG | Searching for 96:a2:4a:f3:be:4a in /var/db/dhcpd_leases ...
	I0926 17:53:01.071694    4178 main.go:141] libmachine: (ha-476000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:53:01.071711    4178 main.go:141] libmachine: (ha-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f7523f}
	I0926 17:53:01.071719    4178 main.go:141] libmachine: (ha-476000) DBG | Found match: 96:a2:4a:f3:be:4a
	I0926 17:53:01.071724    4178 main.go:141] libmachine: (ha-476000) DBG | IP: 192.169.0.5
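The search above is how the hyperkit driver maps the VM's generated MAC to an IP: macOS's vmnet DHCP server records guest leases in /var/db/dhcpd_leases, and the driver scans that file for the hardware address. A rough Go sketch of the lookup, simplified to flat key=value lines (the real file wraps each lease in { ... } braces and lists ip_address before hw_address, which this relies on; findIPByMAC is a made-up helper):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPByMAC scans a dhcpd_leases-style file for a lease whose hw_address
// ends with the given MAC and returns the ip_address recorded just before it.
// A sketch, not a full parser of the actual plist-ish lease format.
func findIPByMAC(path, mac string) (string, bool) {
	f, err := os.Open(path)
	if err != nil {
		return "", false
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			if strings.HasSuffix(line, mac) {
				return ip, true
			}
		}
	}
	return "", false
}

func main() {
	if ip, ok := findIPByMAC("/var/db/dhcpd_leases", "96:a2:4a:f3:be:4a"); ok {
		fmt.Println("IP:", ip) // the log found 192.169.0.5 for this MAC
	}
}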
	I0926 17:53:01.071801    4178 main.go:141] libmachine: (ha-476000) Calling .GetConfigRaw
	I0926 17:53:01.072466    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:01.072682    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:01.073265    4178 machine.go:93] provisionDockerMachine start ...
	I0926 17:53:01.073276    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:01.073432    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:01.073553    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:01.073654    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:01.073744    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:01.073824    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:01.073962    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:01.074151    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:01.074160    4178 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 17:53:01.077803    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 17:53:01.131821    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 17:53:01.132498    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:01.132519    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:01.132527    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:01.132535    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:01.515934    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 17:53:01.515948    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 17:53:01.630853    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:01.630870    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:01.630880    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:01.630889    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:01.631762    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 17:53:01.631773    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 17:53:07.224844    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0926 17:53:07.224979    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0926 17:53:07.224989    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0926 17:53:07.249067    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0926 17:53:12.148094    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 17:53:12.148109    4178 main.go:141] libmachine: (ha-476000) Calling .GetMachineName
	I0926 17:53:12.148318    4178 buildroot.go:166] provisioning hostname "ha-476000"
	I0926 17:53:12.148328    4178 main.go:141] libmachine: (ha-476000) Calling .GetMachineName
	I0926 17:53:12.148430    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.148546    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.148649    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.148741    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.148844    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.148986    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.149192    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.149200    4178 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-476000 && echo "ha-476000" | sudo tee /etc/hostname
	I0926 17:53:12.225889    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-476000
	
	I0926 17:53:12.225907    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.226039    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.226125    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.226235    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.226313    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.226463    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.226601    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.226612    4178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-476000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-476000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-476000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:53:12.298491    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
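The shell snippet sent just above keeps /etc/hosts idempotent: if no line already ends with the hostname, it either rewrites an existing 127.0.1.1 entry in place or appends a new one. The same logic expressed in Go, operating on file content as a string for illustration (ensureHostsEntry is a made-up helper, and its suffix match is looser than the shell's \s-anchored grep):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry returns hosts content guaranteed to map the hostname:
// no-op if present, rewrite of an existing 127.0.1.1 line, else append.
func ensureHostsEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), name) {
			return hosts // already present, nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name
}

func main() {
	fmt.Println(ensureHostsEntry("127.0.0.1 localhost", "ha-476000"))
}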
	I0926 17:53:12.298512    4178 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 17:53:12.298531    4178 buildroot.go:174] setting up certificates
	I0926 17:53:12.298537    4178 provision.go:84] configureAuth start
	I0926 17:53:12.298544    4178 main.go:141] libmachine: (ha-476000) Calling .GetMachineName
	I0926 17:53:12.298672    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:12.298777    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.298858    4178 provision.go:143] copyHostCerts
	I0926 17:53:12.298890    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:12.298959    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 17:53:12.298968    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:12.299110    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 17:53:12.299320    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:12.299359    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 17:53:12.299364    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:12.299452    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 17:53:12.299596    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:12.299633    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 17:53:12.299638    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:12.299717    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 17:53:12.299883    4178 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.ha-476000 san=[127.0.0.1 192.169.0.5 ha-476000 localhost minikube]
	I0926 17:53:12.619231    4178 provision.go:177] copyRemoteCerts
	I0926 17:53:12.619306    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 17:53:12.619328    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.619499    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.619617    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.619721    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.619805    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:12.659598    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 17:53:12.659672    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 17:53:12.679552    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 17:53:12.679620    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0926 17:53:12.699069    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 17:53:12.699141    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 17:53:12.718755    4178 provision.go:87] duration metric: took 420.20261ms to configureAuth
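configureAuth above issues a server certificate signed by the local minikube CA with the SAN list [127.0.0.1 192.169.0.5 ha-476000 localhost minikube], then copies ca.pem, server.pem, and server-key.pem into /etc/docker so dockerd can require TLS. A compact sketch of that CA-plus-server-cert issuance with Go's crypto/x509 (key sizes, serial numbers, and the must helper are illustrative choices, not minikube's exact parameters):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"log"
	"math/big"
	"net"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		log.Fatal(err)
	}
	return v
}

func main() {
	// Self-signed CA, playing the minikubeCA role from the log.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Server certificate carrying the SANs from the provision.go line above.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-476000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-476000", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey))
	srvPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Printf("issued server.pem (%d bytes)\n", len(srvPEM))
}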
	I0926 17:53:12.718767    4178 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:53:12.718921    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:12.718934    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:12.719072    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.719167    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.719255    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.719341    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.719422    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.719544    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.719669    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.719676    4178 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:53:12.785771    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:53:12.785788    4178 buildroot.go:70] root file system type: tmpfs
	I0926 17:53:12.785872    4178 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:53:12.785886    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.786022    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.786110    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.786193    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.786273    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.786415    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.786558    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.786601    4178 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:53:12.862455    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 17:53:12.862477    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.862607    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.862705    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.862800    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.862882    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.863016    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.863156    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.863169    4178 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:53:14.510518    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 17:53:14.510534    4178 machine.go:96] duration metric: took 13.437211612s to provisionDockerMachine
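Two details in the docker.service exchange above are worth noting. First, the empty ExecStart= line followed by a full ExecStart=/usr/bin/dockerd ... is deliberate: as the unit's own comments explain, systemd rejects multiple ExecStart values for non-oneshot services, so the override must clear the inherited command before setting its own. Second, the `diff -u ... || { mv ...; daemon-reload; enable; restart; }` idiom makes the update idempotent: docker only gets restarted when the rendered unit differs from what is on disk (here diff failed because the file did not yet exist, so the new unit was installed and enabled). A sketch of that write-if-changed pattern in Go (paths and systemctl steps taken from the log; error handling trimmed; updateUnit is a made-up helper):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

// updateUnit writes the rendered unit only when it differs from what is on
// disk, and only then reloads and restarts the service, mirroring the
// `diff -u ... || { ... }` command above.
func updateUnit(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: skip daemon-reload and the restart entirely
	}
	if err := os.WriteFile(path, rendered, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = updateUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n"))
}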
	I0926 17:53:14.510545    4178 start.go:293] postStartSetup for "ha-476000" (driver="hyperkit")
	I0926 17:53:14.510553    4178 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:53:14.510563    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.510765    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:53:14.510780    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.510875    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.510981    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.511085    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.511186    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.553095    4178 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:53:14.556852    4178 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 17:53:14.556867    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 17:53:14.556973    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 17:53:14.557159    4178 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 17:53:14.557167    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 17:53:14.557383    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 17:53:14.567060    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:53:14.600616    4178 start.go:296] duration metric: took 90.060103ms for postStartSetup
	I0926 17:53:14.600637    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.600819    4178 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0926 17:53:14.600832    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.600912    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.600992    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.601061    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.601150    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.640650    4178 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0926 17:53:14.640716    4178 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0926 17:53:14.694957    4178 fix.go:56] duration metric: took 13.816065248s for fixHost
	I0926 17:53:14.694980    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.695115    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.695206    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.695301    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.695399    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.695527    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:14.695674    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:14.695682    4178 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:53:14.760098    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398394.872717718
	
	I0926 17:53:14.760109    4178 fix.go:216] guest clock: 1727398394.872717718
	I0926 17:53:14.760115    4178 fix.go:229] Guest: 2024-09-26 17:53:14.872717718 -0700 PDT Remote: 2024-09-26 17:53:14.69497 -0700 PDT m=+14.262859348 (delta=177.747718ms)
	I0926 17:53:14.760134    4178 fix.go:200] guest clock delta is within tolerance: 177.747718ms
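The fix step above compares the guest's `date +%s.%N` output against the host clock and only considers a resync when the delta exceeds a tolerance; here the 177ms skew passes. A small Go sketch of that comparison (the 2s tolerance and function name are assumptions for illustration):

package main

import (
	"fmt"
	"math"
	"time"
)

// withinTolerance parses a guest epoch (seconds.nanoseconds) and reports the
// guest-vs-host clock delta and whether it is small enough to skip a resync.
func withinTolerance(guestEpoch float64, host time.Time, tol time.Duration) (time.Duration, bool) {
	guest := time.Unix(0, int64(guestEpoch*float64(time.Second)))
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tol)
}

func main() {
	// The guest and remote timestamps from the fix.go lines above.
	delta, ok := withinTolerance(1727398394.872717718, time.Unix(1727398394, 694970000), 2*time.Second)
	fmt.Printf("delta=%v within=%v\n", delta, ok)
}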
	I0926 17:53:14.760137    4178 start.go:83] releasing machines lock for "ha-476000", held for 13.881299475s
	I0926 17:53:14.760155    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760297    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:14.760395    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760729    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760850    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760950    4178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:53:14.760987    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.761013    4178 ssh_runner.go:195] Run: cat /version.json
	I0926 17:53:14.761025    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.761099    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.761116    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.761194    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.761205    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.761304    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.761313    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.761398    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.761432    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.795855    4178 ssh_runner.go:195] Run: systemctl --version
	I0926 17:53:14.843523    4178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 17:53:14.848548    4178 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:53:14.848602    4178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 17:53:14.862277    4178 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 17:53:14.862289    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:14.862388    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:14.879332    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 17:53:14.888407    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:53:14.897249    4178 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:53:14.897300    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:53:14.906191    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:14.914943    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:53:14.923611    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:14.932390    4178 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:53:14.941382    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:53:14.950233    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:53:14.959047    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 17:53:14.967887    4178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:53:14.975975    4178 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 17:53:14.976018    4178 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 17:53:14.985185    4178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
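The sysctl failure above is expected on a fresh boot: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so minikube falls back to modprobe and then enables IPv4 forwarding. That probe-then-load fallback, sketched in Go (assumes root inside the Linux guest; the function name is mine):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter loads br_netfilter if its sysctl tree is absent,
// then turns on IPv4 forwarding, mirroring the three commands above.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// "cannot stat ...: No such file or directory" in the log is this case.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}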
	I0926 17:53:14.993181    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:15.086628    4178 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 17:53:15.106310    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:15.106396    4178 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:53:15.118546    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:15.129665    4178 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:53:15.143061    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:15.154154    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:15.164978    4178 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 17:53:15.188125    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:15.199509    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:15.214608    4178 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:53:15.217523    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:53:15.225391    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 17:53:15.238858    4178 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:53:15.337444    4178 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:53:15.437802    4178 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:53:15.437879    4178 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 17:53:15.451733    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:15.563208    4178 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:53:17.891140    4178 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.327906141s)
	I0926 17:53:17.891209    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 17:53:17.902729    4178 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0926 17:53:17.915694    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:17.926164    4178 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 17:53:18.028587    4178 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 17:53:18.135687    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:18.246049    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 17:53:18.259788    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:18.270995    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:18.379007    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 17:53:18.442458    4178 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 17:53:18.442555    4178 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0926 17:53:18.447167    4178 start.go:563] Will wait 60s for crictl version
	I0926 17:53:18.447233    4178 ssh_runner.go:195] Run: which crictl
	I0926 17:53:18.450364    4178 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 17:53:18.474973    4178 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0926 17:53:18.475082    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:53:18.492744    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:53:18.534852    4178 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0926 17:53:18.534897    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:18.535304    4178 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0926 17:53:18.539884    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 17:53:18.549924    4178 kubeadm.go:883] updating cluster {Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 17:53:18.550017    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:53:18.550087    4178 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 17:53:18.562413    4178 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0926 17:53:18.562429    4178 docker.go:615] Images already preloaded, skipping extraction
	I0926 17:53:18.562517    4178 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 17:53:18.574107    4178 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
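The two `docker images` listings above contain the same image set in a different order, which is why the code concludes the images are already preloaded and skips both extraction and loading. A rough Go sketch of an order-insensitive comparison of that kind; illustrative only, not minikube's actual implementation:

package main

import (
	"fmt"
	"sort"
)

// imagesMatch reports whether two image listings contain the same
// tags regardless of order, the kind of check behind "Images already
// preloaded, skipping extraction".
func imagesMatch(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	x := append([]string(nil), a...)
	y := append([]string(nil), b...)
	sort.Strings(x)
	sort.Strings(y)
	for i := range x {
		if x[i] != y[i] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(imagesMatch(
		[]string{"registry.k8s.io/pause:3.10", "registry.k8s.io/etcd:3.5.15-0"},
		[]string{"registry.k8s.io/etcd:3.5.15-0", "registry.k8s.io/pause:3.10"},
	))
}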
	I0926 17:53:18.574127    4178 cache_images.go:84] Images are preloaded, skipping loading
	I0926 17:53:18.574137    4178 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0926 17:53:18.574213    4178 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-476000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 17:53:18.574296    4178 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0926 17:53:18.611557    4178 cni.go:84] Creating CNI manager for ""
	I0926 17:53:18.611571    4178 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0926 17:53:18.611586    4178 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 17:53:18.611607    4178 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-476000 NodeName:ha-476000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 17:53:18.611700    4178 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-476000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
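The kubeadm config above is rendered from a Go template and later written to /var/tmp/minikube/kubeadm.yaml.new (the 2148-byte scp further down). A toy text/template rendering of just the InitConfiguration fragment, with values taken from this log; the real template in minikube's source carries far more sections:

package main

import (
	"os"
	"text/template"
)

// A trimmed-down stand-in for the template minikube renders above.
// Values mirror this log: node ha-476000 at 192.169.0.5, API on 8443.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	data := struct {
		NodeIP   string
		Port     int
		NodeName string
	}{NodeIP: "192.169.0.5", Port: 8443, NodeName: "ha-476000"}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}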
	
	I0926 17:53:18.611713    4178 kube-vip.go:115] generating kube-vip config ...
	I0926 17:53:18.611769    4178 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0926 17:53:18.624452    4178 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0926 17:53:18.624524    4178 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
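The generated manifest above configures kube-vip entirely through environment variables: an ARP-advertised virtual IP 192.169.0.254 on eth0, control-plane load balancing on port 8443, and a 5s/3s/1s leader-election lease so exactly one control-plane node holds the VIP. As a compact reference, a Go sketch collecting those settings into a map; vipEnv is a hypothetical helper, not the actual generator:

package main

import "fmt"

// vipEnv mirrors the env block of the kube-vip pod above: ARP VIP,
// control-plane load balancing, and the leader-election lease knobs.
func vipEnv(vip string, apiPort int) map[string]string {
	p := fmt.Sprint(apiPort)
	return map[string]string{
		"vip_arp":            "true",
		"address":            vip,
		"port":               p,
		"vip_interface":      "eth0",
		"vip_cidr":           "32",
		"cp_enable":          "true",
		"vip_leaderelection": "true",
		"vip_leaseduration":  "5",
		"vip_renewdeadline":  "3",
		"vip_retryperiod":    "1",
		"lb_enable":          "true",
		"lb_port":            p,
	}
}

func main() {
	for k, v := range vipEnv("192.169.0.254", 8443) {
		fmt.Printf("%s=%s\n", k, v)
	}
}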
	I0926 17:53:18.624583    4178 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0926 17:53:18.632661    4178 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 17:53:18.632722    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0926 17:53:18.640016    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0926 17:53:18.653424    4178 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 17:53:18.666861    4178 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0926 17:53:18.680665    4178 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0926 17:53:18.694237    4178 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0926 17:53:18.697273    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 17:53:18.706489    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:18.799127    4178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:53:18.813428    4178 certs.go:68] Setting up /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000 for IP: 192.169.0.5
	I0926 17:53:18.813441    4178 certs.go:194] generating shared ca certs ...
	I0926 17:53:18.813450    4178 certs.go:226] acquiring lock for ca certs: {Name:mk3d4665cb5756ae49ea6f7cd2a58d273aecc507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:18.813627    4178 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key
	I0926 17:53:18.813697    4178 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key
	I0926 17:53:18.813709    4178 certs.go:256] generating profile certs ...
	I0926 17:53:18.813816    4178 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key
	I0926 17:53:18.813837    4178 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9
	I0926 17:53:18.813853    4178 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0926 17:53:19.198737    4178 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9 ...
	I0926 17:53:19.198759    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9: {Name:mkf72026f41cf052c5981dfd73bcc3ea46813a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.199347    4178 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9 ...
	I0926 17:53:19.199358    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9: {Name:mkb6fc9895bd700bb149434e702cedd545112b66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.199565    4178 certs.go:381] copying /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9 -> /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt
	I0926 17:53:19.199778    4178 certs.go:385] copying /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9 -> /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key
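The freshly minted apiserver cert above carries IP SANs for the in-cluster service address 10.96.0.1, loopback, 10.0.0.1, all three control-plane node IPs, and the HA VIP 192.169.0.254, which is what lets any control-plane endpoint terminate TLS for the API. A hedged Go sketch of assembling such a SAN list; apiserverSANs is an illustrative helper, simplified from what the log shows:

package main

import (
	"fmt"
	"net"
)

// apiserverSANs rebuilds the SAN list logged above: the first address
// of the service CIDR, loopback, 10.0.0.1, every control-plane node
// IP, and the HA virtual IP.
func apiserverSANs(serviceCIDR, vip string, nodeIPs []string) []net.IP {
	_, svcNet, _ := net.ParseCIDR(serviceCIDR)
	first := svcNet.IP.To4()
	first[3]++ // 10.96.0.0 -> 10.96.0.1, the in-cluster kubernetes service IP
	sans := []net.IP{first, net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1")}
	for _, ip := range nodeIPs {
		sans = append(sans, net.ParseIP(ip))
	}
	return append(sans, net.ParseIP(vip))
}

func main() {
	fmt.Println(apiserverSANs("10.96.0.0/12", "192.169.0.254",
		[]string{"192.169.0.5", "192.169.0.6", "192.169.0.7"}))
}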
	I0926 17:53:19.200020    4178 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key
	I0926 17:53:19.200030    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0926 17:53:19.200052    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0926 17:53:19.200071    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0926 17:53:19.200089    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0926 17:53:19.200107    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0926 17:53:19.200125    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0926 17:53:19.200142    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0926 17:53:19.200160    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0926 17:53:19.200250    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem (1338 bytes)
	W0926 17:53:19.200297    4178 certs.go:480] ignoring /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679_empty.pem, impossibly tiny 0 bytes
	I0926 17:53:19.200306    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 17:53:19.200335    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem (1082 bytes)
	I0926 17:53:19.200365    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem (1123 bytes)
	I0926 17:53:19.200393    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem (1675 bytes)
	I0926 17:53:19.200455    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:53:19.200488    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem -> /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.200508    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.200526    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.200943    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 17:53:19.229781    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 17:53:19.249730    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 17:53:19.269922    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0926 17:53:19.290358    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0926 17:53:19.309964    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 17:53:19.329782    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 17:53:19.349170    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 17:53:19.368557    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem --> /usr/share/ca-certificates/1679.pem (1338 bytes)
	I0926 17:53:19.388315    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1708 bytes)
	I0926 17:53:19.407646    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 17:53:19.427156    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 17:53:19.441065    4178 ssh_runner.go:195] Run: openssl version
	I0926 17:53:19.445301    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1679.pem && ln -fs /usr/share/ca-certificates/1679.pem /etc/ssl/certs/1679.pem"
	I0926 17:53:19.453728    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.457317    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:36 /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.457357    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.461742    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1679.pem /etc/ssl/certs/51391683.0"
	I0926 17:53:19.470198    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0926 17:53:19.478616    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.482140    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:36 /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.482201    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.486473    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 17:53:19.494777    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 17:53:19.503295    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.506902    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.506943    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.511360    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
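Each CA above gets the same treatment: `openssl x509 -hash -noout` computes the subject hash, and a symlink /etc/ssl/certs/<hash>.0 is pointed at the certificate so OpenSSL's hash-based CA lookup can find it. A Go sketch of that pattern, assuming root privileges; the log additionally links each PEM itself into /etc/ssl/certs first:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash asks openssl for the cert's subject hash and symlinks
// /etc/ssl/certs/<hash>.0 at the PEM, like the ln -fs commands above.
func linkByHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // behave like `ln -fs`: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem"))
}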
	I0926 17:53:19.519826    4178 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 17:53:19.523465    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0926 17:53:19.528006    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0926 17:53:19.532444    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0926 17:53:19.537126    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0926 17:53:19.541512    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0926 17:53:19.545827    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
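The run of openssl probes above checks every control-plane certificate with `-checkend 86400`: openssl exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, so a clean exit means no rotation is needed before the restart. A minimal Go sketch of that probe:

package main

import (
	"fmt"
	"os/exec"
)

// validFor24h mirrors the `openssl x509 -checkend 86400` probes above:
// a nil error (exit 0) means the cert outlives the next 24 hours.
func validFor24h(certPath string) bool {
	return exec.Command("openssl", "x509", "-noout",
		"-in", certPath, "-checkend", "86400").Run() == nil
}

func main() {
	fmt.Println(validFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}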
	I0926 17:53:19.550166    4178 kubeadm.go:392] StartCluster: {Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:53:19.550298    4178 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 17:53:19.561803    4178 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 17:53:19.569639    4178 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0926 17:53:19.569650    4178 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0926 17:53:19.569698    4178 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0926 17:53:19.577403    4178 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0926 17:53:19.577718    4178 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-476000" does not appear in /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:19.577801    4178 kubeconfig.go:62] /Users/jenkins/minikube-integration/19711-1128/kubeconfig needs updating (will repair): [kubeconfig missing "ha-476000" cluster setting kubeconfig missing "ha-476000" context setting]
	I0926 17:53:19.577967    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/kubeconfig: {Name:mkb9c069c578c490d8cb734b05a21821e9215482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.578378    4178 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:19.578577    4178 kapi.go:59] client config for ha-476000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key", CAFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7459f00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 17:53:19.578890    4178 cert_rotation.go:140] Starting client certificate rotation controller
	I0926 17:53:19.579075    4178 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0926 17:53:19.586457    4178 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0926 17:53:19.586468    4178 kubeadm.go:597] duration metric: took 16.814329ms to restartPrimaryControlPlane
	I0926 17:53:19.586474    4178 kubeadm.go:394] duration metric: took 36.313109ms to StartCluster
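The "does not require reconfiguration" verdict above comes from the `sudo diff -u` run two lines earlier: identical kubeadm.yaml and kubeadm.yaml.new files mean the control plane can be restarted in place rather than re-initialized. A sketch of that exit-code convention in Go (diff exits 0 when the files match, 1 when they differ); paths are illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` and maps the exit status to a
// drift decision, the same convention the log relies on above.
func configDrifted(oldPath, newPath string) (bool, error) {
	err := exec.Command("diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, nil
	}
	return false, err // exit status >= 2 means diff itself failed
}

func main() {
	fmt.Println(configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new"))
}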
	I0926 17:53:19.586484    4178 settings.go:142] acquiring lock: {Name:mka8948d0f70add5c5f20f2eca7124a97a496c1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.586556    4178 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:19.586877    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/kubeconfig: {Name:mkb9c069c578c490d8cb734b05a21821e9215482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.587096    4178 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:53:19.587108    4178 start.go:241] waiting for startup goroutines ...
	I0926 17:53:19.587128    4178 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 17:53:19.587252    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:19.629430    4178 out.go:177] * Enabled addons: 
	I0926 17:53:19.650423    4178 addons.go:510] duration metric: took 63.269239ms for enable addons: enabled=[]
	I0926 17:53:19.650464    4178 start.go:246] waiting for cluster config update ...
	I0926 17:53:19.650475    4178 start.go:255] writing updated cluster config ...
	I0926 17:53:19.672508    4178 out.go:201] 
	I0926 17:53:19.693989    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:19.694118    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:19.716427    4178 out.go:177] * Starting "ha-476000-m02" control-plane node in "ha-476000" cluster
	I0926 17:53:19.758555    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:53:19.758588    4178 cache.go:56] Caching tarball of preloaded images
	I0926 17:53:19.758767    4178 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 17:53:19.758785    4178 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:53:19.758898    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:19.759817    4178 start.go:360] acquireMachinesLock for ha-476000-m02: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:53:19.759922    4178 start.go:364] duration metric: took 80.364µs to acquireMachinesLock for "ha-476000-m02"
	I0926 17:53:19.759947    4178 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:53:19.759956    4178 fix.go:54] fixHost starting: m02
	I0926 17:53:19.760406    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:19.760442    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:19.769605    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52063
	I0926 17:53:19.770014    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:19.770353    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:19.770365    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:19.770608    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:19.770743    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:19.770835    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetState
	I0926 17:53:19.770922    4178 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:19.771000    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid from json: 4002
	I0926 17:53:19.771916    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid 4002 missing from process table
	I0926 17:53:19.771940    4178 fix.go:112] recreateIfNeeded on ha-476000-m02: state=Stopped err=<nil>
	I0926 17:53:19.771957    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	W0926 17:53:19.772037    4178 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:53:19.814436    4178 out.go:177] * Restarting existing hyperkit VM for "ha-476000-m02" ...
	I0926 17:53:19.835535    4178 main.go:141] libmachine: (ha-476000-m02) Calling .Start
	I0926 17:53:19.835810    4178 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:19.835874    4178 main.go:141] libmachine: (ha-476000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid
	I0926 17:53:19.837665    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid 4002 missing from process table
	I0926 17:53:19.837678    4178 main.go:141] libmachine: (ha-476000-m02) DBG | pid 4002 is in state "Stopped"
	I0926 17:53:19.837694    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid...
	I0926 17:53:19.838041    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Using UUID 58f499c4-942a-445b-bae0-ab27a7b8106e
	I0926 17:53:19.865707    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Generated MAC 9e:5:36:80:93:e3
	I0926 17:53:19.865728    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000
	I0926 17:53:19.865872    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"58f499c4-942a-445b-bae0-ab27a7b8106e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:19.865901    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"58f499c4-942a-445b-bae0-ab27a7b8106e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:19.865946    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "58f499c4-942a-445b-bae0-ab27a7b8106e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/ha-476000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"}
	I0926 17:53:19.866020    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 58f499c4-942a-445b-bae0-ab27a7b8106e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/ha-476000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"
	I0926 17:53:19.866041    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 17:53:19.867306    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Pid is 4198
	I0926 17:53:19.867704    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Attempt 0
	I0926 17:53:19.867718    4178 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:19.867787    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid from json: 4198
	I0926 17:53:19.869727    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Searching for 9e:5:36:80:93:e3 in /var/db/dhcpd_leases ...
	I0926 17:53:19.869759    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:53:19.869772    4178 main.go:141] libmachine: (ha-476000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 17:53:19.869793    4178 main.go:141] libmachine: (ha-476000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 17:53:19.869821    4178 main.go:141] libmachine: (ha-476000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f751f8}
	I0926 17:53:19.869834    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Found match: 9e:5:36:80:93:e3
	I0926 17:53:19.869848    4178 main.go:141] libmachine: (ha-476000-m02) DBG | IP: 192.169.0.6
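Because hyperkit guests get their addresses from the host's vmnet DHCP server, the driver recovers the restarted VM's IP by matching its generated MAC against /var/db/dhcpd_leases, as in the "Found match" lines above. A simplified line-oriented scan of that file in Go; the real file groups its fields inside {...} blocks, so this parser is an approximation:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans the macOS vmnet lease file for an hw_address match
// and returns the most recently seen ip_address, the lookup the
// driver logs above.
func ipForMAC(leasesPath, mac string) (string, bool) {
	f, err := os.Open(leasesPath)
	if err != nil {
		return "", false
	}
	defer f.Close()
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=")
		}
		if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
			return ip, true
		}
	}
	return "", false
}

func main() {
	fmt.Println(ipForMAC("/var/db/dhcpd_leases", "9e:5:36:80:93:e3"))
}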
	I0926 17:53:19.869914    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetConfigRaw
	I0926 17:53:19.870579    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:19.870762    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:19.871158    4178 machine.go:93] provisionDockerMachine start ...
	I0926 17:53:19.871172    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:19.871294    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:19.871392    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:19.871530    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:19.871631    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:19.871718    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:19.871893    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:19.872031    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:19.872038    4178 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 17:53:19.875766    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 17:53:19.884496    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 17:53:19.885379    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:19.885391    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:19.885398    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:19.885403    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:20.270703    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 17:53:20.270718    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 17:53:20.385412    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:20.385431    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:20.385441    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:20.385468    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:20.386358    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 17:53:20.386369    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 17:53:25.988386    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0926 17:53:25.988424    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0926 17:53:25.988435    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0926 17:53:26.012163    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:26 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0926 17:53:30.140708    4178 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.6:22: connect: connection refused
	I0926 17:53:33.199866    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
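Right after the VM boots, the guest refuses port 22 (the "connection refused" line at 17:53:30), and three seconds later the hostname probe succeeds: the provisioner is simply polling SSH until the machine is reachable. A minimal dial-with-retry sketch; the timings here are illustrative, not the driver's actual backoff:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials the guest's SSH port until it accepts a TCP
// connection or the deadline passes, the loop behind the refusal
// followed by a successful `hostname` run above.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		c, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			c.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("ssh on %s not reachable within %v", addr, timeout)
}

func main() {
	fmt.Println(waitForSSH("192.169.0.6:22", 30*time.Second))
}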
	
	I0926 17:53:33.199881    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetMachineName
	I0926 17:53:33.200004    4178 buildroot.go:166] provisioning hostname "ha-476000-m02"
	I0926 17:53:33.200013    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetMachineName
	I0926 17:53:33.200123    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.200213    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.200322    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.200426    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.200540    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.200702    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.200858    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.200867    4178 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-476000-m02 && echo "ha-476000-m02" | sudo tee /etc/hostname
	I0926 17:53:33.269037    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-476000-m02
	
	I0926 17:53:33.269056    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.269193    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.269285    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.269368    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.269450    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.269573    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.269735    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.269746    4178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-476000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-476000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-476000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:53:33.331289    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 17:53:33.331305    4178 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 17:53:33.331314    4178 buildroot.go:174] setting up certificates
	I0926 17:53:33.331321    4178 provision.go:84] configureAuth start
	I0926 17:53:33.331328    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetMachineName
	I0926 17:53:33.331463    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:33.331556    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.331643    4178 provision.go:143] copyHostCerts
	I0926 17:53:33.331674    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:33.331734    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 17:53:33.331740    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:33.331856    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 17:53:33.332044    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:33.332093    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 17:53:33.332098    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:33.332176    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 17:53:33.332314    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:33.332352    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 17:53:33.332356    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:33.332427    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 17:53:33.332570    4178 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.ha-476000-m02 san=[127.0.0.1 192.169.0.6 ha-476000-m02 localhost minikube]
	I0926 17:53:33.395607    4178 provision.go:177] copyRemoteCerts
	I0926 17:53:33.395696    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 17:53:33.395715    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.395906    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.396015    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.396100    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.396196    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:33.431740    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 17:53:33.431806    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0926 17:53:33.452053    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 17:53:33.452106    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 17:53:33.471760    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 17:53:33.471825    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 17:53:33.490896    4178 provision.go:87] duration metric: took 159.567474ms to configureAuth
	I0926 17:53:33.490910    4178 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:53:33.491086    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:33.491099    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:33.491231    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.491321    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.491413    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.491498    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.491591    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.491713    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.491847    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.491854    4178 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:53:33.547403    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:53:33.547417    4178 buildroot.go:70] root file system type: tmpfs
	I0926 17:53:33.547504    4178 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:53:33.547518    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.547665    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.547775    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.547896    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.547997    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.548125    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.548268    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.548312    4178 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:53:33.613348    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 17:53:33.613367    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.613495    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.613582    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.613661    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.613747    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.613879    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.614018    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.614033    4178 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:53:35.261247    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 17:53:35.261262    4178 machine.go:96] duration metric: took 15.390039559s to provisionDockerMachine
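	The diff-or-replace one-liner a few lines up is an idempotent update: the freshly rendered docker.service.new only replaces the live unit, and docker is only re-enabled and restarted, when the content actually differs. Here diff fails because no docker.service exists yet, so the branch performs the first-time install (hence the "Created symlink" output). The same pattern as a standalone sketch with generic paths:

	  NEW=/lib/systemd/system/docker.service.new
	  CUR=/lib/systemd/system/docker.service
	  if ! sudo diff -u "$CUR" "$NEW" >/dev/null 2>&1; then
	      sudo mv "$NEW" "$CUR"
	      sudo systemctl daemon-reload
	      sudo systemctl enable docker
	      sudo systemctl restart docker
	  fi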
	I0926 17:53:35.261270    4178 start.go:293] postStartSetup for "ha-476000-m02" (driver="hyperkit")
	I0926 17:53:35.261294    4178 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:53:35.261308    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.261509    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:53:35.261522    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.261612    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.261704    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.261809    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.261922    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:35.302268    4178 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:53:35.305656    4178 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 17:53:35.305666    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 17:53:35.305765    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 17:53:35.305947    4178 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 17:53:35.305953    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 17:53:35.306171    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 17:53:35.314020    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:53:35.344643    4178 start.go:296] duration metric: took 83.349532ms for postStartSetup
	I0926 17:53:35.344681    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.344863    4178 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0926 17:53:35.344877    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.344965    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.345056    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.345137    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.345223    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:35.381164    4178 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0926 17:53:35.381229    4178 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0926 17:53:35.414571    4178 fix.go:56] duration metric: took 15.654555871s for fixHost
	I0926 17:53:35.414597    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.414747    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.414839    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.414932    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.415022    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.415156    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:35.415295    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:35.415302    4178 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:53:35.472100    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398415.586409353
	
	I0926 17:53:35.472129    4178 fix.go:216] guest clock: 1727398415.586409353
	I0926 17:53:35.472134    4178 fix.go:229] Guest: 2024-09-26 17:53:35.586409353 -0700 PDT Remote: 2024-09-26 17:53:35.414586 -0700 PDT m=+34.982399519 (delta=171.823353ms)
	I0926 17:53:35.472150    4178 fix.go:200] guest clock delta is within tolerance: 171.823353ms
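	fix.go compares the host clock against the guest clock (date +%s.%N over SSH) and only forces a resync when the delta exceeds its tolerance; the 171.8ms measured here passes. A rough by-hand equivalent at whole-second precision (sub-second needs GNU date's %N, which the Linux guest has but stock macOS date does not; the ssh details are illustrative):

	  GUEST=$(ssh -i .minikube/machines/ha-476000-m02/id_rsa docker@192.169.0.6 'date +%s')
	  HOST=$(date +%s)
	  DELTA=$(( HOST - GUEST ))
	  echo "clock delta: ${DELTA#-}s"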
	I0926 17:53:35.472153    4178 start.go:83] releasing machines lock for "ha-476000-m02", held for 15.712162695s
	I0926 17:53:35.472170    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.472305    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:35.513568    4178 out.go:177] * Found network options:
	I0926 17:53:35.535552    4178 out.go:177]   - NO_PROXY=192.169.0.5
	W0926 17:53:35.557416    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:53:35.557455    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.558341    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.558579    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.558709    4178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:53:35.558764    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	W0926 17:53:35.558835    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:53:35.558964    4178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 17:53:35.558985    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.559000    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.559215    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.559232    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.559433    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.559464    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.559662    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.559681    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:35.559790    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	W0926 17:53:35.596059    4178 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:53:35.596139    4178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 17:53:35.610162    4178 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 17:53:35.610178    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:35.610237    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:35.646709    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 17:53:35.656640    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:53:35.665578    4178 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:53:35.665623    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:53:35.674574    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:35.683489    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:53:35.692471    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:35.701275    4178 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:53:35.710401    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:53:35.719421    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:53:35.728448    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
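	Although docker is the chosen runtime for this profile, the containerd config shipped in the image is normalized as well: the sed passes above pin the pause image to registry.k8s.io/pause:3.10, force the runc v2 runtime, and switch the cgroup driver to cgroupfs. The key edit in isolation (the same sed the runner executes, against the guest's /etc/containerd/config.toml):

	  sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	  sudo systemctl restart containerd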
	I0926 17:53:35.738067    4178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:53:35.746743    4178 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 17:53:35.746802    4178 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 17:53:35.755939    4178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
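	The sysctl probe above fails with status 255 because /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded; the runner treats that as recoverable, loads the module, and enables IPv4 forwarding, both standard Kubernetes networking prerequisites. The same preparation by hand:

	  sudo modprobe br_netfilter
	  sudo sysctl net.bridge.bridge-nf-call-iptables      # resolvable now that the module is loaded
	  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'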
	I0926 17:53:35.763977    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:35.862563    4178 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 17:53:35.881531    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:35.881616    4178 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:53:35.899471    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:35.910823    4178 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:53:35.923558    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:35.935946    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:35.946007    4178 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 17:53:35.969898    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:35.980115    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:35.995271    4178 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:53:35.998508    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:53:36.005810    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 17:53:36.019492    4178 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:53:36.116976    4178 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:53:36.228090    4178 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:53:36.228117    4178 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 17:53:36.242164    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:36.335597    4178 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:53:38.678847    4178 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.343223137s)
	I0926 17:53:38.678917    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 17:53:38.689531    4178 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0926 17:53:38.702816    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:38.713151    4178 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 17:53:38.819068    4178 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 17:53:38.926667    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:39.040074    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 17:53:39.054197    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:39.065256    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:39.163219    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 17:53:39.228416    4178 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 17:53:39.228518    4178 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
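	After cri-docker.socket and cri-docker.service are restarted, the runner stats /var/run/cri-dockerd.sock and will poll for up to 60s while socket activation creates it. A simple loop expressing the same wait (the 1s interval is illustrative):

	  for _ in $(seq 1 60); do
	      [ -S /var/run/cri-dockerd.sock ] && break
	      sleep 1
	  done
	  stat /var/run/cri-dockerd.sock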
	I0926 17:53:39.233191    4178 start.go:563] Will wait 60s for crictl version
	I0926 17:53:39.233249    4178 ssh_runner.go:195] Run: which crictl
	I0926 17:53:39.236580    4178 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 17:53:39.262407    4178 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0926 17:53:39.262495    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:53:39.279010    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:53:39.317905    4178 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0926 17:53:39.359545    4178 out.go:177]   - env NO_PROXY=192.169.0.5
	I0926 17:53:39.381103    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:39.381320    4178 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0926 17:53:39.384579    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
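	The hosts entry is refreshed with a filter-then-append idiom rather than sed -i: any stale host.minikube.internal line is dropped, the fresh mapping is appended, and the temp file is copied back over /etc/hosts, so repeated runs never accumulate duplicates. The idiom unrolled (name and IP taken from this run):

	  { grep -v $'\thost.minikube.internal$' /etc/hosts
	    printf '192.169.0.1\thost.minikube.internal\n'
	  } > /tmp/hosts.$$
	  sudo cp /tmp/hosts.$$ /etc/hosts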
	I0926 17:53:39.394395    4178 mustload.go:65] Loading cluster: ha-476000
	I0926 17:53:39.394560    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:39.394810    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:39.394834    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:39.403482    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52086
	I0926 17:53:39.403823    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:39.404150    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:39.404164    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:39.404434    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:39.404542    4178 main.go:141] libmachine: (ha-476000) Calling .GetState
	I0926 17:53:39.404632    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:39.404706    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4191
	I0926 17:53:39.405678    4178 host.go:66] Checking if "ha-476000" exists ...
	I0926 17:53:39.405956    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:39.405986    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:39.414686    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52088
	I0926 17:53:39.415056    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:39.415379    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:39.415388    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:39.415605    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:39.415728    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:39.415830    4178 certs.go:68] Setting up /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000 for IP: 192.169.0.6
	I0926 17:53:39.415836    4178 certs.go:194] generating shared ca certs ...
	I0926 17:53:39.415849    4178 certs.go:226] acquiring lock for ca certs: {Name:mk3d4665cb5756ae49ea6f7cd2a58d273aecc507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:39.416032    4178 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key
	I0926 17:53:39.416108    4178 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key
	I0926 17:53:39.416119    4178 certs.go:256] generating profile certs ...
	I0926 17:53:39.416243    4178 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key
	I0926 17:53:39.416331    4178 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.462632c0
	I0926 17:53:39.416399    4178 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key
	I0926 17:53:39.416406    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0926 17:53:39.416427    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0926 17:53:39.416446    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0926 17:53:39.416465    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0926 17:53:39.416482    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0926 17:53:39.416510    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0926 17:53:39.416544    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0926 17:53:39.416564    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0926 17:53:39.416666    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem (1338 bytes)
	W0926 17:53:39.416716    4178 certs.go:480] ignoring /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679_empty.pem, impossibly tiny 0 bytes
	I0926 17:53:39.416725    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 17:53:39.416762    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem (1082 bytes)
	I0926 17:53:39.416795    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem (1123 bytes)
	I0926 17:53:39.416828    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem (1675 bytes)
	I0926 17:53:39.416893    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:53:39.416929    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.416949    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem -> /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.416967    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.416991    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:39.417078    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:39.417153    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:39.417237    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:39.417320    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:39.447975    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0926 17:53:39.451073    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0926 17:53:39.458912    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0926 17:53:39.462003    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0926 17:53:39.470783    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0926 17:53:39.473836    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0926 17:53:39.481537    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0926 17:53:39.484645    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0926 17:53:39.492945    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0926 17:53:39.495978    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0926 17:53:39.503610    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0926 17:53:39.506808    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0926 17:53:39.514787    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 17:53:39.534891    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 17:53:39.554745    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 17:53:39.574668    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0926 17:53:39.594523    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0926 17:53:39.614131    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 17:53:39.633606    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 17:53:39.653376    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 17:53:39.673369    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 17:53:39.692952    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem --> /usr/share/ca-certificates/1679.pem (1338 bytes)
	I0926 17:53:39.712634    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1708 bytes)
	I0926 17:53:39.732005    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0926 17:53:39.745464    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0926 17:53:39.759232    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0926 17:53:39.772911    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0926 17:53:39.786441    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0926 17:53:39.800266    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0926 17:53:39.813927    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0926 17:53:39.827332    4178 ssh_runner.go:195] Run: openssl version
	I0926 17:53:39.831566    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0926 17:53:39.839850    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.843163    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:36 /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.843206    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.847374    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 17:53:39.855624    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 17:53:39.863965    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.867400    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.867452    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.871715    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 17:53:39.879907    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1679.pem && ln -fs /usr/share/ca-certificates/1679.pem /etc/ssl/certs/1679.pem"
	I0926 17:53:39.888247    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.891606    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:36 /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.891654    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.895855    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1679.pem /etc/ssl/certs/51391683.0"
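	Each CA is installed twice: the PEM lands under /usr/share/ca-certificates, and a symlink is created in /etc/ssl/certs under the OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0 above), which is how OpenSSL's default verify path locates trust anchors without running update-ca-certificates. The hashing step in isolation, using one cert from this run:

	  CERT=/usr/share/ca-certificates/16792.pem
	  HASH=$(openssl x509 -hash -noout -in "$CERT")
	  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"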
	I0926 17:53:39.904043    4178 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 17:53:39.907450    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0926 17:53:39.911778    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0926 17:53:39.915909    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0926 17:53:39.920037    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0926 17:53:39.924167    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0926 17:53:39.928372    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
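	Before the existing control-plane certificates are reused, each is screened with openssl x509 -checkend 86400, which exits non-zero if the certificate is already expired or will expire within the next 86400 seconds (24 hours); a failure here would force regeneration. For example:

	  if ! openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
	      echo "expires within 24h; regenerate"
	  fi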
	I0926 17:53:39.932543    4178 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.1 docker true true} ...
	I0926 17:53:39.932604    4178 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-476000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 17:53:39.932624    4178 kube-vip.go:115] generating kube-vip config ...
	I0926 17:53:39.932670    4178 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0926 17:53:39.944715    4178 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0926 17:53:39.944753    4178 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
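	This manifest runs kube-vip as a static pod on each control-plane node: leader election via the plndr-cp-lock lease decides which node currently holds the virtual address 192.169.0.254, and lb_enable spreads port 8443 across the API servers behind it. (The stray space in "name : lb_enable" is tolerated by YAML, which strips trailing spaces from block-mapping keys.) Once kubelet picks the manifest up, the VIP should answer on the API port; a quick probe, assuming this profile's CA and the default anonymous access to /version:

	  curl --cacert /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt \
	      https://192.169.0.254:8443/version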
	I0926 17:53:39.944822    4178 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0926 17:53:39.953541    4178 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 17:53:39.953597    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0926 17:53:39.961618    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0926 17:53:39.975007    4178 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 17:53:39.988472    4178 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0926 17:53:40.002021    4178 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0926 17:53:40.004933    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 17:53:40.015059    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:40.118867    4178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:53:40.133377    4178 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:53:40.133568    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:40.154757    4178 out.go:177] * Verifying Kubernetes components...
	I0926 17:53:40.196346    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:40.323445    4178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:53:40.338817    4178 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:40.339037    4178 kapi.go:59] client config for ha-476000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key", CAFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7459f00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0926 17:53:40.339084    4178 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0926 17:53:40.339280    4178 node_ready.go:35] waiting up to 6m0s for node "ha-476000-m02" to be "Ready" ...
	I0926 17:53:40.339354    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:40.339359    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:40.339366    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:40.339369    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:47.201921    4178 round_trippers.go:574] Response Status:  in 6862 milliseconds
	I0926 17:53:48.202681    4178 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:48.202709    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:48.202713    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:48.202720    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:48.202724    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:49.203128    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:49.203194    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.1:52091->192.169.0.5:8443: read: connection reset by peer
	I0926 17:53:49.203240    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:49.203247    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:49.203252    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:49.203256    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:50.204478    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:50.204619    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:50.204631    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:50.204642    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:50.204649    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:51.204974    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:51.205045    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:51.205098    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:51.205108    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:51.205118    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:51.205124    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:52.205352    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:52.205474    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:52.205485    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:52.205496    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:52.205505    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:53.206703    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:53.206766    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:53.206822    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:53.206831    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:53.206843    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:53.206849    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:54.208032    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:54.208160    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:54.208172    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:54.208183    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:54.208190    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:55.208420    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:55.208484    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:55.208561    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:55.208572    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:55.208582    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:55.208586    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:56.209388    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:56.209496    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:56.209507    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:56.209517    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:56.209529    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:57.211492    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:57.211560    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:57.211643    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:57.211654    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:57.211665    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:57.211671    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:58.213441    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:58.213520    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:58.213528    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:58.213535    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:58.213538    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:59.215627    4178 round_trippers.go:574] Response Status:  in 1002 milliseconds
	I0926 17:53:59.215689    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:59.215761    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:59.215770    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:59.215781    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:59.215792    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:00.214970    4178 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0926 17:54:00.215057    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:00.215066    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:00.215072    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:00.215075    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:02.766651    4178 round_trippers.go:574] Response Status: 200 OK in 2551 milliseconds
	I0926 17:54:02.767320    4178 node_ready.go:53] node "ha-476000-m02" has status "Ready":"False"
	I0926 17:54:02.767364    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:02.767371    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:02.767378    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:02.767382    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:02.808455    4178 round_trippers.go:574] Response Status: 200 OK in 41 milliseconds
	I0926 17:54:02.839499    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:02.839515    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:02.839522    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:02.839524    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:02.844502    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:03.339950    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:03.339974    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:03.340014    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:03.340033    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:03.343931    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:03.839836    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:03.839849    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:03.839855    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:03.839859    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:03.842811    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:04.340378    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:04.340403    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.340414    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.340421    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.344418    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:04.839736    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:04.839752    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.839758    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.839762    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.842629    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:04.843116    4178 node_ready.go:49] node "ha-476000-m02" has status "Ready":"True"
	I0926 17:54:04.843129    4178 node_ready.go:38] duration metric: took 24.503742617s for node "ha-476000-m02" to be "Ready" ...
	I0926 17:54:04.843136    4178 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0926 17:54:04.843170    4178 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0926 17:54:04.843178    4178 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
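	Node readiness above was established by polling GET /api/v1/nodes/ha-476000-m02 roughly once per second, tolerating connection-refused while the API server came back up, until the status condition flipped (24.5s in this run); the same loop now repeats for each system-critical pod. The equivalent check with kubectl against this profile's kubeconfig:

	  kubectl --kubeconfig /Users/jenkins/minikube-integration/19711-1128/kubeconfig \
	      wait --for=condition=Ready node/ha-476000-m02 --timeout=6m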
	I0926 17:54:04.843227    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0926 17:54:04.843232    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.843238    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.843242    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.851447    4178 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0926 17:54:04.858185    4178 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:04.858238    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:04.858243    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.858250    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.858254    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.860121    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:04.860597    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:04.860608    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.860614    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.860619    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.862704    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:05.358322    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:05.358334    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.358341    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.358344    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.361386    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:05.361939    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:05.361947    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.361954    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.361958    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.366335    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:05.858443    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:05.858462    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.858485    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.858489    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.861181    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:05.861691    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:05.861698    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.861704    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.861706    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.863911    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:06.359311    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:06.359342    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.359350    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.359354    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.362329    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:06.362841    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:06.362848    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.362854    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.362864    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.365951    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:06.860115    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:06.860140    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.860152    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.860192    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.863829    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:06.864356    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:06.864364    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.864370    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.864372    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.866293    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:06.866641    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
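The surrounding loop is minikube's pod readiness wait: pod_ready.go polls the coredns pod roughly every 500ms (see the timestamps: …:04.858, …:05.358, …:05.858, …), re-fetches the ha-476000 node after each pod check, and periodically logs that the pod still has status "Ready":"False", until the Ready condition turns True or the 6m0s budget announced at pod_ready.go:79 runs out. A minimal sketch of the core of such a wait using client-go (the function name waitForPodReady and the error handling are assumptions, not minikube's actual code):

    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodReady polls the named pod until its Ready condition is
    // True or the timeout expires, mirroring the ~500ms cadence above.
    func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat API hiccups as "not ready yet" and keep polling
                }
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

In this run the condition never becomes True, so the poll keeps repeating for the remainder of the log below.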
	I0926 17:54:07.359755    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:07.359781    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.359791    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.359796    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.362929    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:07.363432    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:07.363440    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.363449    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.363454    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.365354    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:07.859403    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:07.859428    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.859440    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.859447    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.863936    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:07.864482    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:07.864489    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.864494    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.864497    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.866695    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:08.359070    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:08.359095    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.359104    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.359110    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.363413    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:08.363975    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:08.363983    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.363989    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.363996    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.366160    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:08.858562    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:08.858596    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.858604    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.858608    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.861584    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:08.862306    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:08.862313    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.862319    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.862329    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.864555    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:09.359666    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:09.359694    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.359706    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.359710    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.364444    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:09.364796    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:09.364802    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.364808    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.364812    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.367017    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:09.367391    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:09.859578    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:09.859628    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.859645    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.859654    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.863289    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:09.863926    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:09.863934    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.863940    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.863942    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.865998    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:10.358368    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:10.358385    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.358391    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.358396    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.366195    4178 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0926 17:54:10.366734    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:10.366743    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.366752    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.366755    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.369544    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:10.859656    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:10.859683    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.859694    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.859701    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.864043    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:10.864491    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:10.864499    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.864504    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.864508    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.866558    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:11.360000    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:11.360026    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.360038    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.360045    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.364064    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:11.364604    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:11.364611    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.364617    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.364620    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.366561    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:11.859988    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:11.860011    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.860023    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.860028    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.863780    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:11.864488    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:11.864496    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.864502    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.864505    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.866527    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:11.866879    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:12.359231    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:12.359302    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.359317    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.359325    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.363142    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:12.363807    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:12.363815    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.363820    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.363823    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.365720    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:12.859295    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:12.859321    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.859332    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.859336    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.863604    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:12.864232    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:12.864243    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.864249    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.864252    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.866340    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:13.360473    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:13.360500    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.360511    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.360516    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.364925    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:13.365659    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:13.365667    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.365672    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.365677    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.367805    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:13.858451    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:13.858477    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.858490    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.858495    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.862381    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:13.862921    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:13.862929    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.862934    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.862938    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.864941    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:14.358942    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:14.358966    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.359005    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.359013    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.365723    4178 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0926 17:54:14.366181    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:14.366189    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.366193    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.366197    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.368552    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:14.368954    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:14.860475    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:14.860501    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.860543    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.860550    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.864207    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:14.864620    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:14.864628    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.864634    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.864637    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.866896    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:15.358734    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:15.358751    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.358757    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.358761    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.361477    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:15.362047    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:15.362056    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.362062    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.362072    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.364404    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:15.859641    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:15.859669    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.859681    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.859690    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.864301    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:15.864755    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:15.864762    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.864767    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.864771    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.866941    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:16.358689    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:16.358713    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.358762    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.358771    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.363038    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:16.363637    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:16.363644    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.363649    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.363665    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.365580    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:16.858829    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:16.858848    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.858857    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.858864    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.861418    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:16.861895    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:16.861903    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.861908    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.861913    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.864330    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:16.864660    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:17.358538    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:17.358557    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.358576    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.358581    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.361634    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:17.362216    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:17.362224    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.362230    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.362235    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.364368    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:17.858951    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:17.859025    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.859068    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.859083    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.863132    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:17.863643    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:17.863651    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.863660    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.863665    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.865816    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:18.358377    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:18.358396    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.358403    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.358429    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.364859    4178 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0926 17:54:18.365288    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:18.365296    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.365303    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.365306    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.367423    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:18.859211    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:18.859237    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.859250    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.859257    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.863321    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:18.863832    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:18.863840    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.863846    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.863849    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.865860    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:18.866261    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:19.358438    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:19.358453    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.358460    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.358463    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.361068    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:19.361685    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:19.361694    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.361700    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.361703    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.364079    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:19.859935    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:19.859961    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.859972    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.859979    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.864189    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:19.864623    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:19.864630    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.864638    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.864641    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.866680    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:20.359100    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:20.359154    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.359164    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.359169    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.362081    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:20.362587    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:20.362595    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.362601    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.362604    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.364581    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:20.860535    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:20.860561    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.860573    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.860581    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.864595    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:20.865051    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:20.865063    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.865070    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.865074    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.866939    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:20.867377    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:21.358839    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:21.358864    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.358910    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.358919    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.362304    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:21.362899    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:21.362907    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.362913    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.362923    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.364904    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:21.859198    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:21.859224    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.859235    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.859244    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.863464    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:21.863902    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:21.863911    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.863916    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.863920    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.866008    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:22.358500    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:22.358557    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.358567    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.358581    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.363039    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:22.363487    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:22.363495    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.363501    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.363504    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.365560    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:22.860486    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:22.860511    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.860523    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.860549    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.865059    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:22.865691    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:22.865699    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.865705    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.865708    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.867780    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:22.868136    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:23.358997    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:23.359023    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.359035    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.359043    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.363268    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:23.363930    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:23.363938    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.363944    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.363948    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.365982    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:23.858407    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:23.858421    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.858452    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.858457    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.861385    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:23.861801    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:23.861812    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.861818    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.861823    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.864061    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:24.360526    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:24.360553    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.360565    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.360571    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.364721    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:24.365349    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:24.365356    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.365362    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.365365    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.367430    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:24.858605    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:24.858630    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.858641    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.858648    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.862472    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:24.863003    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:24.863010    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.863016    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.863018    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.864908    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:25.358639    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:25.358664    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.358677    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.358684    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.362945    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:25.363487    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:25.363495    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.363501    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.363503    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.365691    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:25.366062    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:25.859315    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:25.859333    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.859341    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.859364    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.862801    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:25.863276    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:25.863284    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.863289    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.863293    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.865685    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:26.359001    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:26.359015    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.359021    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.359025    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.361573    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:26.362094    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:26.362101    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.362107    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.362111    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.364144    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:26.858599    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:26.858625    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.858637    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.858644    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.862247    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:26.862753    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:26.862762    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.862767    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.862771    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.864571    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:27.358862    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:27.358888    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.358899    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.358904    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.363109    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:27.363648    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:27.363657    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.363663    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.363669    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.365500    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:27.859752    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:27.859779    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.859790    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.859795    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.864255    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:27.864725    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:27.864733    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.864738    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.864741    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.866764    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:27.867055    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:28.359808    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:28.359835    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.359882    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.359890    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.363146    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:28.363572    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:28.363579    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.363585    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.363589    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.365498    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:28.858708    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:28.858734    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.858746    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.858752    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.862673    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:28.863231    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:28.863238    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.863244    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.863248    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.865181    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:29.359611    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:29.359640    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.359653    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.359660    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.362965    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:29.363411    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:29.363419    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.363425    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.363427    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.365174    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:29.859384    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:29.859402    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.859409    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.859414    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.862499    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:29.863033    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:29.863041    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.863047    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.863050    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.865154    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:30.359191    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:30.359209    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.359255    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.359265    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.361836    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:30.362303    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:30.362312    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.362317    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.362320    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.364567    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:30.364980    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:30.860033    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:30.860066    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.860101    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.860109    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.864359    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:30.864782    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:30.864790    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.864799    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.864805    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.866798    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:31.358678    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:31.358711    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.358762    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.358772    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.363329    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:31.363731    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:31.363739    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.363745    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.363751    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.365894    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:31.858683    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:31.858706    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.858718    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.858724    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.862717    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:31.863254    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:31.863262    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.863268    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.863272    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.865220    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:32.359370    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:32.359420    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.359434    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.359442    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.362904    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:32.363502    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:32.363510    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.363516    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.363518    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.365729    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:32.366016    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:32.859955    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:32.859990    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.859997    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.860001    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.874510    4178 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0926 17:54:32.875130    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:32.875137    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.875142    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.875145    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.883403    4178 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0926 17:54:33.359964    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:33.360006    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.360019    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.360025    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.362527    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:33.362934    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:33.362942    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.362948    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.362953    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.365277    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:33.860043    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:33.860070    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.860082    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.860089    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.864487    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:33.864960    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:33.864968    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.864974    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.864978    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.866813    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:34.359408    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:34.359422    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.359453    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.359457    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.361843    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:34.362407    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:34.362415    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.362419    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.362427    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.364587    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:34.859087    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:34.859113    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.859124    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.859132    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.863123    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:34.863508    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:34.863516    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.863522    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.863525    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.865516    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:34.865853    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:35.359972    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:35.359997    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.360039    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.360048    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.364311    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:35.364957    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:35.364964    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.364970    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.364974    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.367232    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:35.859251    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:35.859265    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.859271    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.859275    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.861746    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:35.862292    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:35.862304    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.862318    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.862323    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.864289    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:36.360234    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:36.360274    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.360284    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.360291    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.363297    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:36.363726    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:36.363734    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.363740    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.363743    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.365689    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:36.859037    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:36.859105    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.859119    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.859130    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.863205    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:36.863621    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:36.863629    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.863635    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.863638    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.865642    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:36.865933    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:37.359101    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:37.359127    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.359139    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.359145    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.363256    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:37.363851    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:37.363859    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.363865    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.363868    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.365908    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:37.859282    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:37.859308    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.859319    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.859325    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.863341    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:37.863718    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:37.863726    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.863731    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.863735    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.865672    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:38.359013    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:38.359055    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.359065    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.359070    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.361936    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:38.362521    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:38.362529    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.362534    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.362538    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.364699    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:38.859426    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:38.859453    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.859466    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.859475    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.863509    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:38.864012    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:38.864020    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.864025    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.864029    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.866259    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:38.866728    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:39.358730    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:39.358748    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.358756    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.358765    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.362410    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:39.362956    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:39.362964    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.362970    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.362979    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.365004    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:39.858564    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:39.858584    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.858592    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.858598    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.861794    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:39.862200    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:39.862208    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.862214    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.862219    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.864175    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.358549    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:40.358586    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.358596    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.358600    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.361533    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:40.362003    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.362011    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.362017    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.362020    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.364141    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:40.860048    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:40.860077    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.860087    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.860093    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.863900    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:40.864305    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.864314    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.864320    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.864322    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.866266    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.866599    4178 pod_ready.go:93] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.866610    4178 pod_ready.go:82] duration metric: took 36.008276067s for pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.866616    4178 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7jwgv" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.866646    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7jwgv
	I0926 17:54:40.866651    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.866657    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.866661    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.868466    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.868930    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.868938    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.868944    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.868948    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.870736    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.871103    4178 pod_ready.go:93] pod "coredns-7c65d6cfc9-7jwgv" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.871111    4178 pod_ready.go:82] duration metric: took 4.489575ms for pod "coredns-7c65d6cfc9-7jwgv" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.871118    4178 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.871146    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-476000
	I0926 17:54:40.871150    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.871156    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.871160    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.873206    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:40.873700    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.873707    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.873713    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.873717    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.875461    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.875829    4178 pod_ready.go:93] pod "etcd-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.875837    4178 pod_ready.go:82] duration metric: took 4.713943ms for pod "etcd-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.875844    4178 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.875875    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-476000-m02
	I0926 17:54:40.875880    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.875885    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.875890    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.877741    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.878137    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:40.878145    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.878151    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.878155    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.880023    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.880375    4178 pod_ready.go:93] pod "etcd-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.880384    4178 pod_ready.go:82] duration metric: took 4.534554ms for pod "etcd-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.880390    4178 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.880419    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-476000-m03
	I0926 17:54:40.880424    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.880429    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.880433    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.882094    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.882474    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:40.882481    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.882486    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.882496    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.884251    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.884613    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "etcd-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:40.884622    4178 pod_ready.go:82] duration metric: took 4.227661ms for pod "etcd-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:40.884628    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "etcd-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:40.884638    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.061560    4178 request.go:632] Waited for 176.87189ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000
	I0926 17:54:41.061616    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000
	I0926 17:54:41.061655    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.061670    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.061677    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.065303    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:41.262138    4178 request.go:632] Waited for 196.341694ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:41.262261    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:41.262270    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.262282    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.262290    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.266333    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:41.266689    4178 pod_ready.go:93] pod "kube-apiserver-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:41.266699    4178 pod_ready.go:82] duration metric: took 382.053003ms for pod "kube-apiserver-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.266705    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.460472    4178 request.go:632] Waited for 193.723597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m02
	I0926 17:54:41.460525    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m02
	I0926 17:54:41.460535    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.460578    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.460588    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.464471    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:41.661359    4178 request.go:632] Waited for 196.505849ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:41.661462    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:41.661475    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.661486    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.661494    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.665427    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:41.665770    4178 pod_ready.go:93] pod "kube-apiserver-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:41.665780    4178 pod_ready.go:82] duration metric: took 399.068092ms for pod "kube-apiserver-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.665789    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.861535    4178 request.go:632] Waited for 195.701622ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m03
	I0926 17:54:41.861634    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m03
	I0926 17:54:41.861648    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.861668    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.861680    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.865792    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:42.061777    4178 request.go:632] Waited for 195.542882ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:42.061836    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:42.061869    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.061880    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.061888    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.066352    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:42.066752    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:42.066763    4178 pod_ready.go:82] duration metric: took 400.967857ms for pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:42.066770    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:42.066774    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.260909    4178 request.go:632] Waited for 194.055971ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000
	I0926 17:54:42.260962    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000
	I0926 17:54:42.261001    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.261021    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.261031    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.264905    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:42.460758    4178 request.go:632] Waited for 195.327303ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:42.460808    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:42.460816    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.460827    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.460837    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.464434    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:42.464776    4178 pod_ready.go:93] pod "kube-controller-manager-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:42.464786    4178 pod_ready.go:82] duration metric: took 398.004555ms for pod "kube-controller-manager-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.464793    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.660316    4178 request.go:632] Waited for 195.46211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m02
	I0926 17:54:42.660458    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m02
	I0926 17:54:42.660474    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.660486    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.660494    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.665327    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:42.860122    4178 request.go:632] Waited for 194.468161ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:42.860201    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:42.860211    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.860222    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.860231    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.864049    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:42.864456    4178 pod_ready.go:93] pod "kube-controller-manager-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:42.864465    4178 pod_ready.go:82] duration metric: took 399.6655ms for pod "kube-controller-manager-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.864473    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:43.060814    4178 request.go:632] Waited for 196.258122ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m03
	I0926 17:54:43.060925    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m03
	I0926 17:54:43.060935    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.060947    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.060956    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.065088    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:43.261824    4178 request.go:632] Waited for 196.351744ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:43.261944    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:43.261957    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.261967    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.261984    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.266272    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:43.266738    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:43.266748    4178 pod_ready.go:82] duration metric: took 402.268136ms for pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:43.266762    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:43.266768    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5d8nb" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:43.460501    4178 request.go:632] Waited for 193.687301ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d8nb
	I0926 17:54:43.460615    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d8nb
	I0926 17:54:43.460627    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.460639    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.460647    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.463846    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:43.662152    4178 request.go:632] Waited for 197.799796ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m04
	I0926 17:54:43.662296    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m04
	I0926 17:54:43.662312    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.662324    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.662334    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.666430    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:43.666928    4178 pod_ready.go:98] node "ha-476000-m04" hosting pod "kube-proxy-5d8nb" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m04" has status "Ready":"Unknown"
	I0926 17:54:43.666940    4178 pod_ready.go:82] duration metric: took 400.16396ms for pod "kube-proxy-5d8nb" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:43.666946    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m04" hosting pod "kube-proxy-5d8nb" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m04" has status "Ready":"Unknown"
	I0926 17:54:43.666950    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bpsqv" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:43.860782    4178 request.go:632] Waited for 193.758415ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bpsqv
	I0926 17:54:43.860851    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bpsqv
	I0926 17:54:43.860893    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.860905    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.860912    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.865061    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:44.060850    4178 request.go:632] Waited for 195.218122ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:44.060902    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:44.060920    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.060968    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.060976    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.065008    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:44.065426    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-proxy-bpsqv" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:44.065437    4178 pod_ready.go:82] duration metric: took 398.480723ms for pod "kube-proxy-bpsqv" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:44.065443    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-proxy-bpsqv" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:44.065448    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ctdh4" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.260264    4178 request.go:632] Waited for 194.757329ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ctdh4
	I0926 17:54:44.260395    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ctdh4
	I0926 17:54:44.260404    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.260417    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.260424    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.264668    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:44.461295    4178 request.go:632] Waited for 196.119983ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:44.461373    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:44.461384    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.461399    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.461407    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.465035    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:44.465397    4178 pod_ready.go:93] pod "kube-proxy-ctdh4" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:44.465406    4178 pod_ready.go:82] duration metric: took 399.951689ms for pod "kube-proxy-ctdh4" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.465413    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrsx7" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.660616    4178 request.go:632] Waited for 195.1575ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrsx7
	I0926 17:54:44.660704    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrsx7
	I0926 17:54:44.660715    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.660726    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.660734    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.664476    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:44.860447    4178 request.go:632] Waited for 195.571151ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:44.860565    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:44.860578    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.860588    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.860596    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.864038    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:44.864554    4178 pod_ready.go:93] pod "kube-proxy-nrsx7" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:44.864566    4178 pod_ready.go:82] duration metric: took 399.145507ms for pod "kube-proxy-nrsx7" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.864575    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.060924    4178 request.go:632] Waited for 196.301993ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000
	I0926 17:54:45.061011    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000
	I0926 17:54:45.061022    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.061034    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.061042    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.065277    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:45.260098    4178 request.go:632] Waited for 194.412657ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:45.260187    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:45.260208    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.260220    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.260229    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.264296    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:45.264558    4178 pod_ready.go:93] pod "kube-scheduler-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:45.264567    4178 pod_ready.go:82] duration metric: took 399.984402ms for pod "kube-scheduler-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.264574    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.460204    4178 request.go:632] Waited for 195.586272ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m02
	I0926 17:54:45.460285    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m02
	I0926 17:54:45.460296    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.460307    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.460315    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.463717    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:45.661528    4178 request.go:632] Waited for 197.284014ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:45.661624    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:45.661634    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.661645    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.661653    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.666080    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:45.666323    4178 pod_ready.go:93] pod "kube-scheduler-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:45.666333    4178 pod_ready.go:82] duration metric: took 401.752851ms for pod "kube-scheduler-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.666340    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.860703    4178 request.go:632] Waited for 194.311899ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m03
	I0926 17:54:45.860736    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m03
	I0926 17:54:45.860740    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.860746    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.860750    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.863521    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:46.061792    4178 request.go:632] Waited for 197.829608ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:46.061901    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:46.061915    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:46.061926    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:46.061934    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:46.065839    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:46.066244    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:46.066255    4178 pod_ready.go:82] duration metric: took 399.908641ms for pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:46.066262    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:46.066267    4178 pod_ready.go:39] duration metric: took 41.222971189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0926 17:54:46.066282    4178 api_server.go:52] waiting for apiserver process to appear ...
	I0926 17:54:46.066375    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:54:46.079414    4178 logs.go:276] 2 containers: [7aabe646a9fe 3fae3e334c04]
	I0926 17:54:46.079513    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:54:46.092379    4178 logs.go:276] 2 containers: [f525de12dc97 bcf266ec7564]
	I0926 17:54:46.092476    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:54:46.105011    4178 logs.go:276] 0 containers: []
	W0926 17:54:46.105025    4178 logs.go:278] No container was found matching "coredns"
	I0926 17:54:46.105107    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:54:46.118452    4178 logs.go:276] 2 containers: [d9fc15fd82f1 d8ed18933791]
	I0926 17:54:46.118550    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:54:46.132316    4178 logs.go:276] 2 containers: [b52376c9cdcd a5a9a7e18064]
	I0926 17:54:46.132402    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:54:46.145649    4178 logs.go:276] 2 containers: [350a07550ad1 f82271bde6b0]
	I0926 17:54:46.145746    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:54:46.160399    4178 logs.go:276] 2 containers: [b73a2fda347d 981336ad66c6]
	I0926 17:54:46.160426    4178 logs.go:123] Gathering logs for kube-apiserver [7aabe646a9fe] ...
	I0926 17:54:46.160432    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aabe646a9fe"
	I0926 17:54:46.180676    4178 logs.go:123] Gathering logs for kube-apiserver [3fae3e334c04] ...
	I0926 17:54:46.180690    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fae3e334c04"
	I0926 17:54:46.213941    4178 logs.go:123] Gathering logs for kindnet [981336ad66c6] ...
	I0926 17:54:46.213956    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 981336ad66c6"
	I0926 17:54:46.229008    4178 logs.go:123] Gathering logs for container status ...
	I0926 17:54:46.229022    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:54:46.263727    4178 logs.go:123] Gathering logs for dmesg ...
	I0926 17:54:46.263743    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:54:46.275216    4178 logs.go:123] Gathering logs for etcd [f525de12dc97] ...
	I0926 17:54:46.275229    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f525de12dc97"
	I0926 17:54:46.340546    4178 logs.go:123] Gathering logs for etcd [bcf266ec7564] ...
	I0926 17:54:46.340563    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf266ec7564"
	I0926 17:54:46.368786    4178 logs.go:123] Gathering logs for kube-scheduler [d8ed18933791] ...
	I0926 17:54:46.368802    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ed18933791"
	I0926 17:54:46.392911    4178 logs.go:123] Gathering logs for kube-proxy [a5a9a7e18064] ...
	I0926 17:54:46.392926    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5a9a7e18064"
	I0926 17:54:46.411685    4178 logs.go:123] Gathering logs for kubelet ...
	I0926 17:54:46.411700    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:54:46.453572    4178 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:54:46.453588    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:54:46.819319    4178 logs.go:123] Gathering logs for kube-scheduler [d9fc15fd82f1] ...
	I0926 17:54:46.819338    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9fc15fd82f1"
	I0926 17:54:46.834299    4178 logs.go:123] Gathering logs for kube-proxy [b52376c9cdcd] ...
	I0926 17:54:46.834315    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b52376c9cdcd"
	I0926 17:54:46.850264    4178 logs.go:123] Gathering logs for Docker ...
	I0926 17:54:46.850278    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:54:46.881220    4178 logs.go:123] Gathering logs for kube-controller-manager [350a07550ad1] ...
	I0926 17:54:46.881233    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 350a07550ad1"
	I0926 17:54:46.915123    4178 logs.go:123] Gathering logs for kube-controller-manager [f82271bde6b0] ...
	I0926 17:54:46.915139    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82271bde6b0"
	I0926 17:54:46.943154    4178 logs.go:123] Gathering logs for kindnet [b73a2fda347d] ...
	I0926 17:54:46.943169    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a2fda347d"
	I0926 17:54:49.459929    4178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 17:54:49.472910    4178 api_server.go:72] duration metric: took 1m9.339247453s to wait for apiserver process to appear ...
	I0926 17:54:49.472923    4178 api_server.go:88] waiting for apiserver healthz status ...
	I0926 17:54:49.473016    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:54:49.489783    4178 logs.go:276] 2 containers: [7aabe646a9fe 3fae3e334c04]
	I0926 17:54:49.489876    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:54:49.503069    4178 logs.go:276] 2 containers: [f525de12dc97 bcf266ec7564]
	I0926 17:54:49.503157    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:54:49.514340    4178 logs.go:276] 0 containers: []
	W0926 17:54:49.514353    4178 logs.go:278] No container was found matching "coredns"
	I0926 17:54:49.514430    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:54:49.528690    4178 logs.go:276] 2 containers: [d9fc15fd82f1 d8ed18933791]
	I0926 17:54:49.528782    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:54:49.540774    4178 logs.go:276] 2 containers: [b52376c9cdcd a5a9a7e18064]
	I0926 17:54:49.540870    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:54:49.553605    4178 logs.go:276] 2 containers: [350a07550ad1 f82271bde6b0]
	I0926 17:54:49.553693    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:54:49.566939    4178 logs.go:276] 2 containers: [b73a2fda347d 981336ad66c6]
	I0926 17:54:49.566961    4178 logs.go:123] Gathering logs for kindnet [981336ad66c6] ...
	I0926 17:54:49.566967    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 981336ad66c6"
	I0926 17:54:49.584163    4178 logs.go:123] Gathering logs for etcd [bcf266ec7564] ...
	I0926 17:54:49.584179    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf266ec7564"
	I0926 17:54:49.608092    4178 logs.go:123] Gathering logs for kube-controller-manager [f82271bde6b0] ...
	I0926 17:54:49.608107    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82271bde6b0"
	I0926 17:54:49.640526    4178 logs.go:123] Gathering logs for etcd [f525de12dc97] ...
	I0926 17:54:49.640542    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f525de12dc97"
	I0926 17:54:49.707920    4178 logs.go:123] Gathering logs for kube-scheduler [d9fc15fd82f1] ...
	I0926 17:54:49.707937    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9fc15fd82f1"
	I0926 17:54:49.725537    4178 logs.go:123] Gathering logs for kube-proxy [b52376c9cdcd] ...
	I0926 17:54:49.725551    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b52376c9cdcd"
	I0926 17:54:49.747118    4178 logs.go:123] Gathering logs for kube-proxy [a5a9a7e18064] ...
	I0926 17:54:49.747134    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5a9a7e18064"
	I0926 17:54:49.763059    4178 logs.go:123] Gathering logs for kindnet [b73a2fda347d] ...
	I0926 17:54:49.763073    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a2fda347d"
	I0926 17:54:49.780606    4178 logs.go:123] Gathering logs for container status ...
	I0926 17:54:49.780619    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:54:49.815474    4178 logs.go:123] Gathering logs for kubelet ...
	I0926 17:54:49.815490    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:54:49.856341    4178 logs.go:123] Gathering logs for kube-apiserver [3fae3e334c04] ...
	I0926 17:54:49.856359    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fae3e334c04"
	I0926 17:54:49.895001    4178 logs.go:123] Gathering logs for kube-apiserver [7aabe646a9fe] ...
	I0926 17:54:49.895016    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aabe646a9fe"
	I0926 17:54:49.915291    4178 logs.go:123] Gathering logs for kube-scheduler [d8ed18933791] ...
	I0926 17:54:49.915307    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ed18933791"
	I0926 17:54:49.931682    4178 logs.go:123] Gathering logs for kube-controller-manager [350a07550ad1] ...
	I0926 17:54:49.931698    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 350a07550ad1"
	I0926 17:54:49.962905    4178 logs.go:123] Gathering logs for Docker ...
	I0926 17:54:49.962920    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:54:49.995739    4178 logs.go:123] Gathering logs for dmesg ...
	I0926 17:54:49.995756    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:54:50.006748    4178 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:54:50.006764    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:54:52.683223    4178 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0926 17:54:52.688111    4178 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0926 17:54:52.688148    4178 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0926 17:54:52.688152    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:52.688158    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:52.688162    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:52.688774    4178 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0926 17:54:52.688866    4178 api_server.go:141] control plane version: v1.31.1
	I0926 17:54:52.688877    4178 api_server.go:131] duration metric: took 3.215937625s to wait for apiserver health ...
	I0926 17:54:52.688882    4178 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 17:54:52.688964    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:54:52.702208    4178 logs.go:276] 2 containers: [7aabe646a9fe 3fae3e334c04]
	I0926 17:54:52.702296    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:54:52.716057    4178 logs.go:276] 2 containers: [f525de12dc97 bcf266ec7564]
	I0926 17:54:52.716146    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:54:52.730288    4178 logs.go:276] 0 containers: []
	W0926 17:54:52.730303    4178 logs.go:278] No container was found matching "coredns"
	I0926 17:54:52.730387    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:54:52.744133    4178 logs.go:276] 2 containers: [d9fc15fd82f1 d8ed18933791]
	I0926 17:54:52.744229    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:54:52.757357    4178 logs.go:276] 2 containers: [b52376c9cdcd a5a9a7e18064]
	I0926 17:54:52.757447    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:54:52.770397    4178 logs.go:276] 2 containers: [350a07550ad1 f82271bde6b0]
	I0926 17:54:52.770488    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:54:52.783588    4178 logs.go:276] 2 containers: [b73a2fda347d 981336ad66c6]
	I0926 17:54:52.783609    4178 logs.go:123] Gathering logs for dmesg ...
	I0926 17:54:52.783615    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:54:52.794149    4178 logs.go:123] Gathering logs for kube-scheduler [d9fc15fd82f1] ...
	I0926 17:54:52.794162    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9fc15fd82f1"
	I0926 17:54:52.810239    4178 logs.go:123] Gathering logs for kube-proxy [a5a9a7e18064] ...
	I0926 17:54:52.810253    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5a9a7e18064"
	I0926 17:54:52.828364    4178 logs.go:123] Gathering logs for kube-controller-manager [350a07550ad1] ...
	I0926 17:54:52.828379    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 350a07550ad1"
	I0926 17:54:52.859712    4178 logs.go:123] Gathering logs for kindnet [b73a2fda347d] ...
	I0926 17:54:52.859726    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a2fda347d"
	I0926 17:54:52.877881    4178 logs.go:123] Gathering logs for container status ...
	I0926 17:54:52.877898    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:54:52.920788    4178 logs.go:123] Gathering logs for kube-scheduler [d8ed18933791] ...
	I0926 17:54:52.920802    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ed18933791"
	I0926 17:54:52.937686    4178 logs.go:123] Gathering logs for Docker ...
	I0926 17:54:52.937700    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:54:52.970435    4178 logs.go:123] Gathering logs for kubelet ...
	I0926 17:54:52.970449    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:54:53.015652    4178 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:54:53.015669    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:54:53.184377    4178 logs.go:123] Gathering logs for etcd [f525de12dc97] ...
	I0926 17:54:53.184391    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f525de12dc97"
	I0926 17:54:53.249067    4178 logs.go:123] Gathering logs for etcd [bcf266ec7564] ...
	I0926 17:54:53.249083    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf266ec7564"
	I0926 17:54:53.274003    4178 logs.go:123] Gathering logs for kube-controller-manager [f82271bde6b0] ...
	I0926 17:54:53.274019    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82271bde6b0"
	I0926 17:54:53.300047    4178 logs.go:123] Gathering logs for kube-apiserver [7aabe646a9fe] ...
	I0926 17:54:53.300062    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aabe646a9fe"
	I0926 17:54:53.321481    4178 logs.go:123] Gathering logs for kube-apiserver [3fae3e334c04] ...
	I0926 17:54:53.321495    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fae3e334c04"
	I0926 17:54:53.356023    4178 logs.go:123] Gathering logs for kube-proxy [b52376c9cdcd] ...
	I0926 17:54:53.356038    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b52376c9cdcd"
	I0926 17:54:53.374219    4178 logs.go:123] Gathering logs for kindnet [981336ad66c6] ...
	I0926 17:54:53.374233    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 981336ad66c6"
	I0926 17:54:55.893460    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0926 17:54:55.893486    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.893529    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.893539    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.899854    4178 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0926 17:54:55.904904    4178 system_pods.go:59] 26 kube-system pods found
	I0926 17:54:55.904920    4178 system_pods.go:61] "coredns-7c65d6cfc9-44l9n" [8009053f-fc43-43ba-a44e-fd1d53e88617] Running
	I0926 17:54:55.904925    4178 system_pods.go:61] "coredns-7c65d6cfc9-7jwgv" [20fb38a0-b993-41b3-9c91-98802d75ef47] Running
	I0926 17:54:55.904928    4178 system_pods.go:61] "etcd-ha-476000" [469cfe32-fa29-4816-afd7-3f115a3a6c67] Running
	I0926 17:54:55.904930    4178 system_pods.go:61] "etcd-ha-476000-m02" [d4c3ec0c-a151-4047-9a30-93cae10d24a0] Running
	I0926 17:54:55.904933    4178 system_pods.go:61] "etcd-ha-476000-m03" [e9166a94-af57-49ec-91a3-dc0c36083ca4] Running
	I0926 17:54:55.904936    4178 system_pods.go:61] "kindnet-44vxl" [488a3806-d7c1-4397-bff8-00d9ea3cb984] Running
	I0926 17:54:55.904938    4178 system_pods.go:61] "kindnet-4pnxr" [23b10d5b-fd63-4284-9c44-413b8bf80354] Running
	I0926 17:54:55.904941    4178 system_pods.go:61] "kindnet-hhrtc" [7ac457e3-ae85-4314-99dc-da37f5875807] Running
	I0926 17:54:55.904943    4178 system_pods.go:61] "kindnet-lgj66" [63fc5d71-c403-40d7-85a7-6ff48e307a79] Running
	I0926 17:54:55.904946    4178 system_pods.go:61] "kube-apiserver-ha-476000" [5c5e9fb4-0cda-4892-bae5-e51e839d2573] Running
	I0926 17:54:55.904948    4178 system_pods.go:61] "kube-apiserver-ha-476000-m02" [2d6d0355-152b-4be4-b810-f1c80c9ef24f] Running
	I0926 17:54:55.904951    4178 system_pods.go:61] "kube-apiserver-ha-476000-m03" [3ce79eb3-635c-4632-82e0-7240a7d49c50] Running
	I0926 17:54:55.904954    4178 system_pods.go:61] "kube-controller-manager-ha-476000" [9edef370-886e-4260-a3d3-99be564d90c6] Running
	I0926 17:54:55.904957    4178 system_pods.go:61] "kube-controller-manager-ha-476000-m02" [5eeed951-eaba-436f-9f80-8f68cb50425d] Running
	I0926 17:54:55.904960    4178 system_pods.go:61] "kube-controller-manager-ha-476000-m03" [e60f68a5-a353-4501-81c1-ac7d76820373] Running
	I0926 17:54:55.904962    4178 system_pods.go:61] "kube-proxy-5d8nb" [1e9c0178-2d58-4ca1-9fbe-c8e54d91bf1a] Running
	I0926 17:54:55.904965    4178 system_pods.go:61] "kube-proxy-bpsqv" [c264b112-c037-4332-9c47-2561ecb928f1] Running
	I0926 17:54:55.904967    4178 system_pods.go:61] "kube-proxy-ctdh4" [9a21a19a-5cad-4eb9-97f3-4c654fbbe59b] Running
	I0926 17:54:55.904970    4178 system_pods.go:61] "kube-proxy-nrsx7" [14b031d0-b044-4500-8cc1-1397c83c1886] Running
	I0926 17:54:55.904973    4178 system_pods.go:61] "kube-scheduler-ha-476000" [ef0e308c-1b28-413c-846e-24935489434d] Running
	I0926 17:54:55.904976    4178 system_pods.go:61] "kube-scheduler-ha-476000-m02" [2b75a7bc-a19c-4aeb-8521-10bd963cacbb] Running
	I0926 17:54:55.904978    4178 system_pods.go:61] "kube-scheduler-ha-476000-m03" [206cbf3f-bff1-4164-a622-3be7593c08ac] Running
	I0926 17:54:55.904981    4178 system_pods.go:61] "kube-vip-ha-476000" [376a9128-fe3c-41aa-b26c-da921ae20e68] Running
	I0926 17:54:55.904997    4178 system_pods.go:61] "kube-vip-ha-476000-m02" [7e907e74-7f15-4c04-b463-682a14766e66] Running
	I0926 17:54:55.905002    4178 system_pods.go:61] "kube-vip-ha-476000-m03" [bf2e3b8c-cf42-45b0-a4d7-a32b820bab21] Running
	I0926 17:54:55.905005    4178 system_pods.go:61] "storage-provisioner" [e3e367a7-6cda-4177-a81d-7897333308d7] Running
	I0926 17:54:55.905009    4178 system_pods.go:74] duration metric: took 3.216111125s to wait for pod list to return data ...
	I0926 17:54:55.905015    4178 default_sa.go:34] waiting for default service account to be created ...
	I0926 17:54:55.905062    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0926 17:54:55.905068    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.905073    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.905077    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.907842    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:55.908016    4178 default_sa.go:45] found service account: "default"
	I0926 17:54:55.908026    4178 default_sa.go:55] duration metric: took 3.006211ms for default service account to be created ...
	I0926 17:54:55.908031    4178 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 17:54:55.908061    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0926 17:54:55.908066    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.908071    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.908075    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.912026    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:55.917054    4178 system_pods.go:86] 26 kube-system pods found
	I0926 17:54:55.917066    4178 system_pods.go:89] "coredns-7c65d6cfc9-44l9n" [8009053f-fc43-43ba-a44e-fd1d53e88617] Running
	I0926 17:54:55.917070    4178 system_pods.go:89] "coredns-7c65d6cfc9-7jwgv" [20fb38a0-b993-41b3-9c91-98802d75ef47] Running
	I0926 17:54:55.917073    4178 system_pods.go:89] "etcd-ha-476000" [469cfe32-fa29-4816-afd7-3f115a3a6c67] Running
	I0926 17:54:55.917076    4178 system_pods.go:89] "etcd-ha-476000-m02" [d4c3ec0c-a151-4047-9a30-93cae10d24a0] Running
	I0926 17:54:55.917080    4178 system_pods.go:89] "etcd-ha-476000-m03" [e9166a94-af57-49ec-91a3-dc0c36083ca4] Running
	I0926 17:54:55.917083    4178 system_pods.go:89] "kindnet-44vxl" [488a3806-d7c1-4397-bff8-00d9ea3cb984] Running
	I0926 17:54:55.917085    4178 system_pods.go:89] "kindnet-4pnxr" [23b10d5b-fd63-4284-9c44-413b8bf80354] Running
	I0926 17:54:55.917088    4178 system_pods.go:89] "kindnet-hhrtc" [7ac457e3-ae85-4314-99dc-da37f5875807] Running
	I0926 17:54:55.917091    4178 system_pods.go:89] "kindnet-lgj66" [63fc5d71-c403-40d7-85a7-6ff48e307a79] Running
	I0926 17:54:55.917094    4178 system_pods.go:89] "kube-apiserver-ha-476000" [5c5e9fb4-0cda-4892-bae5-e51e839d2573] Running
	I0926 17:54:55.917097    4178 system_pods.go:89] "kube-apiserver-ha-476000-m02" [2d6d0355-152b-4be4-b810-f1c80c9ef24f] Running
	I0926 17:54:55.917100    4178 system_pods.go:89] "kube-apiserver-ha-476000-m03" [3ce79eb3-635c-4632-82e0-7240a7d49c50] Running
	I0926 17:54:55.917103    4178 system_pods.go:89] "kube-controller-manager-ha-476000" [9edef370-886e-4260-a3d3-99be564d90c6] Running
	I0926 17:54:55.917106    4178 system_pods.go:89] "kube-controller-manager-ha-476000-m02" [5eeed951-eaba-436f-9f80-8f68cb50425d] Running
	I0926 17:54:55.917110    4178 system_pods.go:89] "kube-controller-manager-ha-476000-m03" [e60f68a5-a353-4501-81c1-ac7d76820373] Running
	I0926 17:54:55.917113    4178 system_pods.go:89] "kube-proxy-5d8nb" [1e9c0178-2d58-4ca1-9fbe-c8e54d91bf1a] Running
	I0926 17:54:55.917116    4178 system_pods.go:89] "kube-proxy-bpsqv" [c264b112-c037-4332-9c47-2561ecb928f1] Running
	I0926 17:54:55.917123    4178 system_pods.go:89] "kube-proxy-ctdh4" [9a21a19a-5cad-4eb9-97f3-4c654fbbe59b] Running
	I0926 17:54:55.917126    4178 system_pods.go:89] "kube-proxy-nrsx7" [14b031d0-b044-4500-8cc1-1397c83c1886] Running
	I0926 17:54:55.917129    4178 system_pods.go:89] "kube-scheduler-ha-476000" [ef0e308c-1b28-413c-846e-24935489434d] Running
	I0926 17:54:55.917132    4178 system_pods.go:89] "kube-scheduler-ha-476000-m02" [2b75a7bc-a19c-4aeb-8521-10bd963cacbb] Running
	I0926 17:54:55.917135    4178 system_pods.go:89] "kube-scheduler-ha-476000-m03" [206cbf3f-bff1-4164-a622-3be7593c08ac] Running
	I0926 17:54:55.917138    4178 system_pods.go:89] "kube-vip-ha-476000" [376a9128-fe3c-41aa-b26c-da921ae20e68] Running
	I0926 17:54:55.917140    4178 system_pods.go:89] "kube-vip-ha-476000-m02" [7e907e74-7f15-4c04-b463-682a14766e66] Running
	I0926 17:54:55.917144    4178 system_pods.go:89] "kube-vip-ha-476000-m03" [bf2e3b8c-cf42-45b0-a4d7-a32b820bab21] Running
	I0926 17:54:55.917146    4178 system_pods.go:89] "storage-provisioner" [e3e367a7-6cda-4177-a81d-7897333308d7] Running
	I0926 17:54:55.917151    4178 system_pods.go:126] duration metric: took 9.116472ms to wait for k8s-apps to be running ...
	I0926 17:54:55.917160    4178 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 17:54:55.917225    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 17:54:55.928854    4178 system_svc.go:56] duration metric: took 11.69353ms WaitForService to wait for kubelet
	I0926 17:54:55.928867    4178 kubeadm.go:582] duration metric: took 1m15.795183486s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:54:55.928878    4178 node_conditions.go:102] verifying NodePressure condition ...
	I0926 17:54:55.928918    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0926 17:54:55.928924    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.928930    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.928933    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.932146    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:55.933143    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933159    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933173    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933176    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933181    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933183    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933186    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933190    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933193    4178 node_conditions.go:105] duration metric: took 4.311525ms to run NodePressure ...
	I0926 17:54:55.933202    4178 start.go:241] waiting for startup goroutines ...
	I0926 17:54:55.933219    4178 start.go:255] writing updated cluster config ...
	I0926 17:54:55.954947    4178 out.go:201] 
	I0926 17:54:55.975717    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:54:55.975787    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:54:55.997338    4178 out.go:177] * Starting "ha-476000-m03" control-plane node in "ha-476000" cluster
	I0926 17:54:56.055744    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:54:56.055778    4178 cache.go:56] Caching tarball of preloaded images
	I0926 17:54:56.056007    4178 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 17:54:56.056029    4178 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:54:56.056173    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:54:56.057121    4178 start.go:360] acquireMachinesLock for ha-476000-m03: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:54:56.057290    4178 start.go:364] duration metric: took 139.967µs to acquireMachinesLock for "ha-476000-m03"
	I0926 17:54:56.057321    4178 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:54:56.057331    4178 fix.go:54] fixHost starting: m03
	I0926 17:54:56.057738    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:54:56.057766    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:54:56.066973    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52106
	I0926 17:54:56.067348    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:54:56.067691    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:54:56.067705    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:54:56.067918    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:54:56.068036    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:54:56.068122    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetState
	I0926 17:54:56.068201    4178 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:54:56.068289    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid from json: 3537
	I0926 17:54:56.069219    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid 3537 missing from process table
	I0926 17:54:56.069237    4178 fix.go:112] recreateIfNeeded on ha-476000-m03: state=Stopped err=<nil>
	I0926 17:54:56.069245    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	W0926 17:54:56.069331    4178 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:54:56.090482    4178 out.go:177] * Restarting existing hyperkit VM for "ha-476000-m03" ...
	I0926 17:54:56.132629    4178 main.go:141] libmachine: (ha-476000-m03) Calling .Start
	I0926 17:54:56.132887    4178 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:54:56.132957    4178 main.go:141] libmachine: (ha-476000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid
	I0926 17:54:56.134746    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid 3537 missing from process table
	I0926 17:54:56.134764    4178 main.go:141] libmachine: (ha-476000-m03) DBG | pid 3537 is in state "Stopped"
	I0926 17:54:56.134782    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid...
	I0926 17:54:56.135225    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Using UUID 91a51069-a363-4c64-acd8-a07fa14dbb0d
	I0926 17:54:56.162007    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Generated MAC 66:6f:5a:2d:e2:16
	I0926 17:54:56.162027    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000
	I0926 17:54:56.162143    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"91a51069-a363-4c64-acd8-a07fa14dbb0d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003accc0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:54:56.162181    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"91a51069-a363-4c64-acd8-a07fa14dbb0d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003accc0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:54:56.162253    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "91a51069-a363-4c64-acd8-a07fa14dbb0d", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/ha-476000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"}
	I0926 17:54:56.162300    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 91a51069-a363-4c64-acd8-a07fa14dbb0d -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/ha-476000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"
	I0926 17:54:56.162312    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 17:54:56.163637    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Pid is 4226
	I0926 17:54:56.164043    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Attempt 0
	I0926 17:54:56.164071    4178 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:54:56.164140    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid from json: 4226
	I0926 17:54:56.166126    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Searching for 66:6f:5a:2d:e2:16 in /var/db/dhcpd_leases ...
	I0926 17:54:56.166206    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:54:56.166235    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 17:54:56.166254    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 17:54:56.166288    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 17:54:56.166308    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f7515c}
	I0926 17:54:56.166318    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Found match: 66:6f:5a:2d:e2:16
	I0926 17:54:56.166327    4178 main.go:141] libmachine: (ha-476000-m03) DBG | IP: 192.169.0.7
	I0926 17:54:56.166332    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetConfigRaw
	I0926 17:54:56.166976    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetIP
	I0926 17:54:56.167202    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:54:56.167675    4178 machine.go:93] provisionDockerMachine start ...
	I0926 17:54:56.167686    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:54:56.167814    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:54:56.167961    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:54:56.168088    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:54:56.168207    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:54:56.168321    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:54:56.168450    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:54:56.168613    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:54:56.168622    4178 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 17:54:56.172038    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 17:54:56.180188    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 17:54:56.181229    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:54:56.181258    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:54:56.181274    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:54:56.181290    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:54:56.563523    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 17:54:56.563541    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 17:54:56.678338    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:54:56.678355    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:54:56.678363    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:54:56.678373    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:54:56.679203    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 17:54:56.679212    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 17:55:02.300815    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0926 17:55:02.300833    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0926 17:55:02.300855    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0926 17:55:02.325228    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0926 17:55:31.235618    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 17:55:31.235633    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetMachineName
	I0926 17:55:31.235773    4178 buildroot.go:166] provisioning hostname "ha-476000-m03"
	I0926 17:55:31.235783    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetMachineName
	I0926 17:55:31.235886    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.235992    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.236097    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.236189    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.236274    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.236414    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.236550    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.236559    4178 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-476000-m03 && echo "ha-476000-m03" | sudo tee /etc/hostname
	I0926 17:55:31.305642    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-476000-m03
	
	I0926 17:55:31.305657    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.305790    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.305908    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.306006    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.306089    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.306235    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.306383    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.306394    4178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-476000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-476000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-476000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:55:31.369873    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 17:55:31.369889    4178 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 17:55:31.369903    4178 buildroot.go:174] setting up certificates
	I0926 17:55:31.369909    4178 provision.go:84] configureAuth start
	I0926 17:55:31.369916    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetMachineName
	I0926 17:55:31.370048    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetIP
	I0926 17:55:31.370147    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.370234    4178 provision.go:143] copyHostCerts
	I0926 17:55:31.370268    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:55:31.370317    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 17:55:31.370322    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:55:31.370451    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 17:55:31.370647    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:55:31.370676    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 17:55:31.370680    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:55:31.370748    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 17:55:31.370903    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:55:31.370932    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 17:55:31.370937    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:55:31.371006    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 17:55:31.371150    4178 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.ha-476000-m03 san=[127.0.0.1 192.169.0.7 ha-476000-m03 localhost minikube]
	I0926 17:55:31.544988    4178 provision.go:177] copyRemoteCerts
	I0926 17:55:31.545045    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 17:55:31.545059    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.545196    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.545298    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.545402    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.545491    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:31.580851    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 17:55:31.580928    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 17:55:31.601357    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 17:55:31.601440    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 17:55:31.621840    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 17:55:31.621921    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 17:55:31.641722    4178 provision.go:87] duration metric: took 271.803372ms to configureAuth
	I0926 17:55:31.641736    4178 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:55:31.641909    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:55:31.641923    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:31.642055    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.642148    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.642236    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.642329    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.642416    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.642531    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.642652    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.642659    4178 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:55:31.699187    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:55:31.699200    4178 buildroot.go:70] root file system type: tmpfs
	I0926 17:55:31.699283    4178 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:55:31.699296    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.699424    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.699525    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.699630    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.699725    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.699863    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.700007    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.700056    4178 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:55:31.769790    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 17:55:31.769808    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.769942    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.770041    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.770127    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.770216    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.770341    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.770484    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.770496    4178 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:55:33.400017    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 17:55:33.400032    4178 machine.go:96] duration metric: took 37.232210795s to provisionDockerMachine
	I0926 17:55:33.400040    4178 start.go:293] postStartSetup for "ha-476000-m03" (driver="hyperkit")
	I0926 17:55:33.400054    4178 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:55:33.400067    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.400257    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:55:33.400271    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.400365    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.400451    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.400540    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.400615    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:33.437533    4178 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:55:33.440663    4178 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 17:55:33.440673    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 17:55:33.440763    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 17:55:33.440901    4178 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 17:55:33.440910    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 17:55:33.441066    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 17:55:33.449179    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:55:33.469328    4178 start.go:296] duration metric: took 69.278399ms for postStartSetup
	I0926 17:55:33.469350    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.469543    4178 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0926 17:55:33.469556    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.469645    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.469723    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.469812    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.469885    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:33.505216    4178 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0926 17:55:33.505294    4178 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0926 17:55:33.540120    4178 fix.go:56] duration metric: took 37.482649135s for fixHost
	I0926 17:55:33.540150    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.540287    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.540382    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.540461    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.540540    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.540677    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:33.540816    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:33.540823    4178 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:55:33.598810    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398533.714160628
	
	I0926 17:55:33.598825    4178 fix.go:216] guest clock: 1727398533.714160628
	I0926 17:55:33.598831    4178 fix.go:229] Guest: 2024-09-26 17:55:33.714160628 -0700 PDT Remote: 2024-09-26 17:55:33.540136 -0700 PDT m=+153.107512249 (delta=174.024628ms)
	I0926 17:55:33.598841    4178 fix.go:200] guest clock delta is within tolerance: 174.024628ms
	I0926 17:55:33.598846    4178 start.go:83] releasing machines lock for "ha-476000-m03", held for 37.541403544s
	I0926 17:55:33.598861    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.598984    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetIP
	I0926 17:55:33.620720    4178 out.go:177] * Found network options:
	I0926 17:55:33.640782    4178 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0926 17:55:33.662722    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	W0926 17:55:33.662755    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:55:33.662789    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.663752    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.664030    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.664220    4178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:55:33.664265    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	W0926 17:55:33.664303    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	W0926 17:55:33.664331    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:55:33.664429    4178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 17:55:33.664449    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.664488    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.664703    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.664719    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.664903    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.664932    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.665066    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:33.665091    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.665207    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	W0926 17:55:33.697895    4178 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:55:33.697966    4178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 17:55:33.748934    4178 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 17:55:33.748959    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:55:33.749065    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:55:33.765581    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 17:55:33.775502    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:55:33.785025    4178 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:55:33.785083    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:55:33.794919    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:55:33.804605    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:55:33.814324    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:55:33.824237    4178 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:55:33.832956    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:55:33.841773    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:55:33.851179    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 17:55:33.860818    4178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:55:33.869929    4178 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 17:55:33.870002    4178 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 17:55:33.880612    4178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 17:55:33.888804    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:55:33.989453    4178 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 17:55:34.008589    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:55:34.008666    4178 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:55:34.033408    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:55:34.045976    4178 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:55:34.061768    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:55:34.072236    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:55:34.082936    4178 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 17:55:34.101453    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:55:34.111855    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:55:34.126151    4178 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:55:34.129207    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:55:34.136448    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 17:55:34.149966    4178 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:55:34.247760    4178 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:55:34.364359    4178 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:55:34.364382    4178 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 17:55:34.380269    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:55:34.475811    4178 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:56:35.519197    4178 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.04314195s)
	I0926 17:56:35.519276    4178 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0926 17:56:35.552893    4178 out.go:201] 
	W0926 17:56:35.574257    4178 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 27 00:55:31 ha-476000-m03 systemd[1]: Starting Docker Application Container Engine...
	Sep 27 00:55:31 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:31.500016553Z" level=info msg="Starting up"
	Sep 27 00:55:31 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:31.500635723Z" level=info msg="containerd not running, starting managed containerd"
	Sep 27 00:55:31 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:31.501585462Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=510
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.515859502Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530811327Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530896497Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530963742Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530999016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531160593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531211393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531353040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531394128Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531431029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531461249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531611451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531854923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533401951Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533446517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533570107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533614884Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533785548Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533833312Z" level=info msg="metadata content store policy set" policy=shared
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537372044Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537425387Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537458961Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537519539Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537555242Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537622818Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537842730Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537922428Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537957588Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537987448Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538017362Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538049217Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538078685Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538107984Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538137843Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538167077Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538198997Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538230397Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538266484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538296944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538326105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538358875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538390741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538420029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538458203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538495889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538528790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538561681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538590379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538618723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538647795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538678724Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538713636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538743343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538771404Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538879453Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538923135Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538973990Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539015313Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539070453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539103724Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539133731Z" level=info msg="NRI interface is disabled by configuration."
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539314481Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539398768Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539457208Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539540620Z" level=info msg="containerd successfully booted in 0.024310s"
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.523809928Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.557923590Z" level=info msg="Loading containers: start."
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.687864975Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.754261548Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.488464069Z" level=info msg="Loading containers: done."
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495297411Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495333206Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495348892Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495450205Z" level=info msg="Daemon has completed initialization"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.514076327Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.514159018Z" level=info msg="API listen on [::]:2376"
	Sep 27 00:55:33 ha-476000-m03 systemd[1]: Started Docker Application Container Engine.
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.603579868Z" level=info msg="Processing signal 'terminated'"
	Sep 27 00:55:34 ha-476000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.604826953Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.605154827Z" level=info msg="Daemon shutdown complete"
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.605194895Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.605243671Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 27 00:55:35 ha-476000-m03 systemd[1]: docker.service: Deactivated successfully.
	Sep 27 00:55:35 ha-476000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Sep 27 00:55:35 ha-476000-m03 systemd[1]: Starting Docker Application Container Engine...
	Sep 27 00:55:35 ha-476000-m03 dockerd[1093]: time="2024-09-27T00:55:35.644572631Z" level=info msg="Starting up"
	Sep 27 00:56:35 ha-476000-m03 dockerd[1093]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 27 00:56:35 ha-476000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 27 00:56:35 ha-476000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 27 00:56:35 ha-476000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0926 17:56:35.574334    4178 out.go:270] * 
	W0926 17:56:35.575462    4178 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:56:35.658842    4178 out.go:201] 
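	The failure above bottoms out at dockerd timing out while dialing /run/containerd/containerd.sock on ha-476000-m03 ("context deadline exceeded"), i.e. the managed containerd never answered after the daemon restart. A minimal manual-triage sketch, assuming the profile and node names from this run and minikube's ssh subcommand; these commands are illustrative and were not part of the recorded test run:
	
	    # hypothetical follow-up: check containerd state and its socket on the failing node
	    out/minikube-darwin-amd64 ssh -p ha-476000 -n ha-476000-m03 -- sudo systemctl status containerd --no-pager
	    out/minikube-darwin-amd64 ssh -p ha-476000 -n ha-476000-m03 -- ls -l /run/containerd/containerd.sock
	    out/minikube-darwin-amd64 ssh -p ha-476000 -n ha-476000-m03 -- sudo journalctl -u containerd --no-pager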
	
	
	==> Docker <==
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.206048904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.206179384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 cri-dockerd[1422]: time="2024-09-27T00:54:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ded079a0572139d8da280864d2cf23e26a7a74761427fdb6aa8247bf1b618b63/resolv.conf as [nameserver 192.169.0.1]"
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.465946902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.465995187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.466006348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.466074171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 cri-dockerd[1422]: time="2024-09-27T00:54:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef132416f65d445e2be52f1f35d402e4103f11df5abe57373ffacf06538460a2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 27 00:54:01 ha-476000 cri-dockerd[1422]: time="2024-09-27T00:54:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82fb727d3b4ab9beb6771fe42b02b13cfa819ec6e94565fc85eb5e44849131dc/resolv.conf as [nameserver 192.169.0.1]"
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.953799067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.953836835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.953845219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.953903701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.967774874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.968202742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.968237276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.968864557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:32 ha-476000 dockerd[1165]: time="2024-09-27T00:54:32.331720830Z" level=info msg="ignoring event" container=182d3576c4be84b985fdcac8be52d4bc1daa03061cca1c9af1ea1719cc87ef93 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:54:32 ha-476000 dockerd[1171]: time="2024-09-27T00:54:32.332359122Z" level=info msg="shim disconnected" id=182d3576c4be84b985fdcac8be52d4bc1daa03061cca1c9af1ea1719cc87ef93 namespace=moby
	Sep 27 00:54:32 ha-476000 dockerd[1171]: time="2024-09-27T00:54:32.332548493Z" level=warning msg="cleaning up after shim disconnected" id=182d3576c4be84b985fdcac8be52d4bc1daa03061cca1c9af1ea1719cc87ef93 namespace=moby
	Sep 27 00:54:32 ha-476000 dockerd[1171]: time="2024-09-27T00:54:32.332589783Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:54:47 ha-476000 dockerd[1171]: time="2024-09-27T00:54:47.288497270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 00:54:47 ha-476000 dockerd[1171]: time="2024-09-27T00:54:47.289077983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:54:47 ha-476000 dockerd[1171]: time="2024-09-27T00:54:47.289196082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:47 ha-476000 dockerd[1171]: time="2024-09-27T00:54:47.289608100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b05b1fc6dccd2       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   82fb727d3b4ab       storage-provisioner
	182d3576c4be8       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   82fb727d3b4ab       storage-provisioner
	1e068209398d4       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   ef132416f65d4       busybox-7dff88458-bvjrf
	3ab08f3aed771       60c005f310ff3                                                                                         2 minutes ago        Running             kube-proxy                1                   ded079a057213       kube-proxy-nrsx7
	13b4ae2edced3       12968670680f4                                                                                         2 minutes ago        Running             kindnet-cni               1                   aedbce80ab870       kindnet-lgj66
	bd209bf19cc97       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   78def8c2a71e9       coredns-7c65d6cfc9-7jwgv
	fa6222acd1314       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   c557d11d235a0       coredns-7c65d6cfc9-44l9n
	87e465b7b95f5       6bab7719df100                                                                                         2 minutes ago        Running             kube-apiserver            2                   84bf5bfc1db95       kube-apiserver-ha-476000
	01c5e9b4fab08       175ffd71cce3d                                                                                         2 minutes ago        Running             kube-controller-manager   2                   7a8e5df4a06d2       kube-controller-manager-ha-476000
	e50b7f6d45d34       38af8ddebf499                                                                                         3 minutes ago        Running             kube-vip                  0                   9ff0bf9fa82a1       kube-vip-ha-476000
	e923cc80604d7       9aa1fad941575                                                                                         3 minutes ago        Running             kube-scheduler            1                   14ddb9d9f440b       kube-scheduler-ha-476000
	89ad0e203b827       2e96e5913fc06                                                                                         3 minutes ago        Running             etcd                      1                   28300cd77661a       etcd-ha-476000
	d6683f4746762       6bab7719df100                                                                                         3 minutes ago        Exited              kube-apiserver            1                   84bf5bfc1db95       kube-apiserver-ha-476000
	06a5f950d0b27       175ffd71cce3d                                                                                         3 minutes ago        Exited              kube-controller-manager   1                   7a8e5df4a06d2       kube-controller-manager-ha-476000
	0fe8d9cd2d8d2       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago       Exited              busybox                   0                   58dc7b4f775bb       busybox-7dff88458-bvjrf
	6e7030dd2319d       c69fa2e9cbf5f                                                                                         13 minutes ago       Exited              coredns                   0                   19d1dd5324d2b       coredns-7c65d6cfc9-7jwgv
	325909e950c7b       c69fa2e9cbf5f                                                                                         13 minutes ago       Exited              coredns                   0                   4de17e21e7a0f       coredns-7c65d6cfc9-44l9n
	730d4ab163e72       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              13 minutes ago       Exited              kindnet-cni               0                   30119aa4fc19b       kindnet-lgj66
	2d1ef1d1af27d       60c005f310ff3                                                                                         14 minutes ago       Exited              kube-proxy                0                   581372b45e58a       kube-proxy-nrsx7
	8b01a83a0b098       9aa1fad941575                                                                                         14 minutes ago       Exited              kube-scheduler            0                   c0232eed71fc3       kube-scheduler-ha-476000
	c08f45a78a8ec       2e96e5913fc06                                                                                         14 minutes ago       Exited              etcd                      0                   ff9ea0993276b       etcd-ha-476000
	
	
	==> coredns [325909e950c7] <==
	[INFO] 10.244.0.4:41413 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172004s
	[INFO] 10.244.0.4:39923 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145289s
	[INFO] 10.244.0.4:55894 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153357s
	[INFO] 10.244.0.4:52696 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059737s
	[INFO] 10.244.1.2:45922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008915s
	[INFO] 10.244.1.2:44828 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111301s
	[INFO] 10.244.1.2:53232 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116513s
	[INFO] 10.244.2.2:38669 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109219s
	[INFO] 10.244.2.2:51776 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069559s
	[INFO] 10.244.2.2:34317 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136009s
	[INFO] 10.244.2.2:35638 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001211s
	[INFO] 10.244.2.2:51345 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075754s
	[INFO] 10.244.0.4:53603 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110008s
	[INFO] 10.244.0.4:48703 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116941s
	[INFO] 10.244.1.2:60563 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101753s
	[INFO] 10.244.1.2:40746 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119902s
	[INFO] 10.244.2.2:38053 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094376s
	[INFO] 10.244.2.2:51713 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069296s
	[INFO] 10.244.0.4:32805 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008605s
	[INFO] 10.244.0.4:44664 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000292333s
	[INFO] 10.244.1.2:33360 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000078243s
	[INFO] 10.244.2.2:36409 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159318s
	[INFO] 10.244.2.2:36868 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094303s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6e7030dd2319] <==
	[INFO] 10.244.0.4:56870 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000085932s
	[INFO] 10.244.0.4:42671 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000180223s
	[INFO] 10.244.1.2:48098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102353s
	[INFO] 10.244.1.2:56626 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00009538s
	[INFO] 10.244.1.2:45195 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135305s
	[INFO] 10.244.1.2:57387 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000073744s
	[INFO] 10.244.1.2:56567 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000045328s
	[INFO] 10.244.2.2:40253 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000077683s
	[INFO] 10.244.2.2:49008 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110327s
	[INFO] 10.244.2.2:54182 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000061031s
	[INFO] 10.244.0.4:53519 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087904s
	[INFO] 10.244.0.4:37380 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132535s
	[INFO] 10.244.1.2:33397 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128623s
	[INFO] 10.244.1.2:35879 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014214s
	[INFO] 10.244.2.2:39230 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133513s
	[INFO] 10.244.2.2:47654 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054424s
	[INFO] 10.244.0.4:59796 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007443s
	[INFO] 10.244.0.4:49766 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000103812s
	[INFO] 10.244.1.2:36226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102458s
	[INFO] 10.244.1.2:35698 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010282s
	[INFO] 10.244.1.2:40757 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000066548s
	[INFO] 10.244.2.2:44488 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148719s
	[INFO] 10.244.2.2:40024 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000069743s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bd209bf19cc9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:43213 - 10525 "HINFO IN 4125844120146388069.4027558012888257277. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0104908s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1432599962]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.650) (total time: 30002ms):
	Trace[1432599962]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (00:54:31.653)
	Trace[1432599962]: [30.002427557s] [30.002427557s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[417897734]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.652) (total time: 30002ms):
	Trace[417897734]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (00:54:31.654)
	Trace[417897734]: [30.002368442s] [30.002368442s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1861937109]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.653) (total time: 30001ms):
	Trace[1861937109]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (00:54:31.654)
	Trace[1861937109]: [30.001494446s] [30.001494446s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [fa6222acd131] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35702 - 33029 "HINFO IN 8241224091513256990.6666502665085127686. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009680676s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1899858293]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.665) (total time: 30001ms):
	Trace[1899858293]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (00:54:31.666)
	Trace[1899858293]: [30.001480741s] [30.001480741s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1985679635]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.669) (total time: 30000ms):
	Trace[1985679635]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (00:54:31.669)
	Trace[1985679635]: [30.000934597s] [30.000934597s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[345146888]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.669) (total time: 30003ms):
	Trace[345146888]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (00:54:31.673)
	Trace[345146888]: [30.003771613s] [30.003771613s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
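	Both restarted coredns pods spend ~30s (00:54:01 to 00:54:31) unable to list Services, Namespaces, and EndpointSlices through the in-cluster apiserver VIP 10.96.0.1:443, consistent with kube-apiserver still coming back up at that point; both pods are Running afterwards per the container status above. A hedged sketch of the equivalent manual check, assuming a kubeconfig pointed at this cluster:
	
	    # illustrative only: confirm the in-cluster apiserver Service and its endpoints exist
	    kubectl get svc kubernetes -n default -o wide
	    kubectl get endpointslices -n default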
	
	
	==> describe nodes <==
	Name:               ha-476000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-476000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-476000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_26T17_42_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:42:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-476000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:56:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:53:57 +0000   Fri, 27 Sep 2024 00:42:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:53:57 +0000   Fri, 27 Sep 2024 00:42:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:53:57 +0000   Fri, 27 Sep 2024 00:42:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:53:57 +0000   Fri, 27 Sep 2024 00:42:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-476000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 24c18e25f36040298bb96a7a31469c55
	  System UUID:                99cf4d4f-0000-0000-a72a-447af4e3b1db
	  Boot ID:                    8cf1f24c-8c01-4381-8f8f-6e75f77e6648
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bvjrf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-44l9n             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7c65d6cfc9-7jwgv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-476000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-lgj66                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-476000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-476000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-nrsx7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-476000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-476000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m39s                  kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-476000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-476000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-476000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node ha-476000 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node ha-476000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node ha-476000 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           14m                    node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  NodeReady                13m                    kubelet          Node ha-476000 status is now: NodeReady
	  Normal  RegisteredNode           12m                    node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  RegisteredNode           9m34s                  node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  Starting                 3m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m22s (x8 over 3m22s)  kubelet          Node ha-476000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m22s (x8 over 3m22s)  kubelet          Node ha-476000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m22s (x7 over 3m22s)  kubelet          Node ha-476000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  RegisteredNode           2m35s                  node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
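
The "Allocated resources" block above is simply the column sums of the pod table: 2 x 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 950m, i.e. 47% of this node's 2-CPU (2000m) allocatable, and 2 x 70Mi + 100Mi + 50Mi = 290Mi of memory requests. A quick way to reproduce just that summary, assuming kubectl is pointed at this cluster's kubeconfig:

	# Print only the request/limit summary for the primary control-plane node
	kubectl describe node ha-476000 | grep -A 8 "Allocated resources"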
	
	
	Name:               ha-476000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-476000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-476000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_26T17_43_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:43:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-476000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:56:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:54:04 +0000   Fri, 27 Sep 2024 00:43:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:54:04 +0000   Fri, 27 Sep 2024 00:43:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:54:04 +0000   Fri, 27 Sep 2024 00:43:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:54:04 +0000   Fri, 27 Sep 2024 00:54:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-476000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 35bc971223ac4e939cad535ac89bc725
	  System UUID:                58f4445b-0000-0000-bae0-ab27a7b8106e
	  Boot ID:                    7dcb1bbe-ca7a-45f1-9dd9-dc673285b7e4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gvp8q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-476000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-hhrtc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-476000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-476000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-ctdh4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-476000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-476000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 13m                  kube-proxy       
	  Normal   Starting                 2m22s                kube-proxy       
	  Normal   Starting                 9m38s                kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)    kubelet          Node ha-476000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)    kubelet          Node ha-476000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)    kubelet          Node ha-476000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                  node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   RegisteredNode           12m                  node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   RegisteredNode           11m                  node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   NodeAllocatableEnforced  9m43s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 9m43s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9m43s                kubelet          Node ha-476000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m43s                kubelet          Node ha-476000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m43s                kubelet          Node ha-476000-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9m43s                kubelet          Node ha-476000-m02 has been rebooted, boot id: 993826c6-3fde-4076-a7cb-33cc19f1b1bc
	  Normal   RegisteredNode           9m35s                node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   NodeHasNoDiskPressure    3m2s (x8 over 3m2s)  kubelet          Node ha-476000-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 3m2s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m2s (x8 over 3m2s)  kubelet          Node ha-476000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     3m2s (x7 over 3m2s)  kubelet          Node ha-476000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m50s                node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   RegisteredNode           2m36s                node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	
	
	Name:               ha-476000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-476000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-476000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_26T17_44_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:44:51 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-476000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:47:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 27 Sep 2024 00:45:53 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 27 Sep 2024 00:45:53 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 27 Sep 2024 00:45:53 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 27 Sep 2024 00:45:53 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-476000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 365f6a31a3d140dba5c1be3b08da7ad2
	  System UUID:                91a54c64-0000-0000-acd8-a07fa14dbb0d
	  Boot ID:                    4ca02f6d-4375-4909-8877-3e005809b499
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jgndj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-476000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-4pnxr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-476000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-476000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-bpsqv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-476000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-476000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-476000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-476000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-476000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           9m35s              node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           2m50s              node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           2m36s              node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  NodeNotReady             2m10s              node-controller  Node ha-476000-m03 status is now: NodeNotReady
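
The Unknown conditions and the node.kubernetes.io/unreachable taints above are what the node lifecycle controller applies once a kubelet stops posting status: the lease for ha-476000-m03 was last renewed at 00:47:14, and the restarted controller manager flipped all four conditions to Unknown at 00:54:32 and emitted the NodeNotReady event. A minimal check of which members are lost, again assuming kubectl points at this cluster:

	# List node readiness, then inspect the taints on the unreachable member
	kubectl get nodes -o wide
	kubectl describe node ha-476000-m03 | grep -A 2 Taints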
	
	
	Name:               ha-476000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-476000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-476000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_26T17_45_52_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:45:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-476000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:47:13 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 27 Sep 2024 00:46:22 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 27 Sep 2024 00:46:22 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 27 Sep 2024 00:46:22 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 27 Sep 2024 00:46:22 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-476000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 7bdc03e4e33a47a0a7d85ecb664669d4
	  System UUID:                dcce4501-0000-0000-a378-25a085ede049
	  Boot ID:                    b0d71ae5-8550-430a-94b7-e146e65fc279
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-44vxl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-5d8nb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-476000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-476000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-476000-m04 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           10m                node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-476000-m04 status is now: NodeReady
	  Normal  RegisteredNode           9m35s              node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  RegisteredNode           2m50s              node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  RegisteredNode           2m36s              node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  NodeNotReady             2m10s              node-controller  Node ha-476000-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.036532] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.006931] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.697129] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006778] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.775372] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.244387] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.695216] systemd-fstab-generator[461]: Ignoring "noauto" option for root device
	[  +0.101404] systemd-fstab-generator[473]: Ignoring "noauto" option for root device
	[  +1.958371] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	[  +0.251045] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +0.050021] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.047173] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +0.112931] systemd-fstab-generator[1157]: Ignoring "noauto" option for root device
	[  +2.468376] systemd-fstab-generator[1375]: Ignoring "noauto" option for root device
	[  +0.117710] systemd-fstab-generator[1387]: Ignoring "noauto" option for root device
	[  +0.113441] systemd-fstab-generator[1399]: Ignoring "noauto" option for root device
	[  +0.129593] systemd-fstab-generator[1414]: Ignoring "noauto" option for root device
	[  +0.427728] systemd-fstab-generator[1574]: Ignoring "noauto" option for root device
	[  +6.920294] kauditd_printk_skb: 212 callbacks suppressed
	[ +21.597968] kauditd_printk_skb: 40 callbacks suppressed
	[Sep27 00:54] kauditd_printk_skb: 94 callbacks suppressed
	
	
	==> etcd [89ad0e203b82] <==
	{"level":"warn","ts":"2024-09-27T00:55:41.539753Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:55:46.540673Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:55:46.541012Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:55:51.540995Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:55:51.541410Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:55:56.541854Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:55:56.541895Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:01.543083Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:01.543179Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:06.543927Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:06.543948Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:11.545083Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:11.545205Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:16.546548Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:16.546812Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:21.547452Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:21.547479Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:26.548475Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:26.548565Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:31.549392Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:31.549456Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:36.549771Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:36.549785Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:41.550781Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:56:41.550810Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	
	
	==> etcd [c08f45a78a8e] <==
	{"level":"warn","ts":"2024-09-27T00:47:41.542035Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T00:47:33.744957Z","time spent":"7.797074842s","remote":"127.0.0.1:40790","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":0,"response size":0,"request content":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true "}
	2024/09/27 00:47:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-27T00:47:41.542079Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.225057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-27T00:47:41.542107Z","caller":"traceutil/trace.go:171","msg":"trace[2123825160] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"299.252922ms","start":"2024-09-27T00:47:41.242851Z","end":"2024-09-27T00:47:41.542104Z","steps":["trace[2123825160] 'agreement among raft nodes before linearized reading'  (duration: 299.224906ms)"],"step_count":1}
	2024/09/27 00:47:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-27T00:47:41.593990Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T00:47:41.594018Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-27T00:47:41.602616Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-27T00:47:41.604582Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604604Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604619Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604716Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604762Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604790Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604798Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604802Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.604809Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.604819Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.605484Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.605507Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.605546Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.605556Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.607550Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-27T00:47:41.607595Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-27T00:47:41.607615Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-476000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 00:56:42 up 3 min,  0 users,  load average: 0.54, 0.40, 0.17
	Linux ha-476000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [13b4ae2edced] <==
	I0927 00:56:12.492701       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:56:22.489348       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 00:56:22.489709       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 00:56:22.490058       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 00:56:22.490139       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:56:22.490264       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 00:56:22.490346       1 main.go:299] handling current node
	I0927 00:56:22.490376       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 00:56:22.490394       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 00:56:32.491793       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 00:56:32.491867       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 00:56:32.491992       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 00:56:32.492035       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:56:32.492099       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 00:56:32.492142       1 main.go:299] handling current node
	I0927 00:56:32.492170       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 00:56:32.492208       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 00:56:42.489088       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 00:56:42.489368       1 main.go:299] handling current node
	I0927 00:56:42.489471       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 00:56:42.489527       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 00:56:42.489873       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 00:56:42.490004       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 00:56:42.490158       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 00:56:42.490189       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
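
Both kindnet instances show the same steady-state loop: every 10 seconds the daemon walks all four nodes, handles the local one, and records each remote node's PodCIDR, from which it maintains one host route per remote CIDR. A quick way to see the resulting routes, assuming the profile name used in these logs:

	# Routes to the other nodes' pod CIDRs as programmed on the primary node
	out/minikube-darwin-amd64 -p ha-476000 ssh "ip route show | grep 10.244"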
	
	
	==> kindnet [730d4ab163e7] <==
	I0927 00:47:03.705461       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:47:13.713791       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 00:47:13.713985       1 main.go:299] handling current node
	I0927 00:47:13.714102       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 00:47:13.714214       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 00:47:13.714414       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 00:47:13.714545       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 00:47:13.714946       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 00:47:13.715065       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:47:23.710748       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 00:47:23.710778       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 00:47:23.710966       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 00:47:23.711202       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 00:47:23.711295       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 00:47:23.711303       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:47:23.711508       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 00:47:23.711595       1 main.go:299] handling current node
	I0927 00:47:33.704824       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 00:47:33.704897       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 00:47:33.705242       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 00:47:33.705307       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 00:47:33.705486       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 00:47:33.705818       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:47:33.705995       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 00:47:33.706008       1 main.go:299] handling current node
	
	
	==> kube-apiserver [87e465b7b95f] <==
	I0927 00:54:02.884947       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0927 00:54:02.884955       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0927 00:54:02.943365       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 00:54:02.943570       1 policy_source.go:224] refreshing policies
	I0927 00:54:02.949648       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 00:54:02.975777       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0927 00:54:02.975897       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0927 00:54:02.975835       1 shared_informer.go:320] Caches are synced for configmaps
	I0927 00:54:02.976591       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0927 00:54:02.977323       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0927 00:54:02.977419       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0927 00:54:02.977565       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0927 00:54:02.982008       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0927 00:54:02.982182       1 shared_informer.go:320] Caches are synced for node_authorizer
	W0927 00:54:02.987432       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	I0927 00:54:02.987619       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0927 00:54:02.987707       1 aggregator.go:171] initial CRD sync complete...
	I0927 00:54:02.987750       1 autoregister_controller.go:144] Starting autoregister controller
	I0927 00:54:02.987857       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0927 00:54:02.987898       1 cache.go:39] Caches are synced for autoregister controller
	I0927 00:54:02.988709       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 00:54:02.993982       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0927 00:54:02.997126       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0927 00:54:03.884450       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0927 00:54:04.211694       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.6]
	
	
	==> kube-apiserver [d6683f474676] <==
	I0927 00:53:26.693239       1 options.go:228] external host was not specified, using 192.169.0.5
	I0927 00:53:26.695952       1 server.go:142] Version: v1.31.1
	I0927 00:53:26.696173       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:53:27.299904       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0927 00:53:27.320033       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 00:53:27.330041       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0927 00:53:27.330098       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0927 00:53:27.332141       1 instance.go:232] Using reconciler: lease
	W0927 00:53:47.293920       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0927 00:53:47.294149       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0927 00:53:47.333433       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
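
This is the short-lived apiserver instance from the restart window: it started at 00:53:26, could not complete the etcd-backed lease reconciler because dials to 127.0.0.1:2379 were still failing, and exited fatally ~20s later; the replacement instance (87e465b7b95f above) synced its caches at 00:54:02 once etcd was reachable again. A minimal probe of the recovered apiserver (-k because the serving cert is signed by the cluster's own CA):

	# healthz is the coarse health endpoint; readyz?verbose breaks readiness down per check
	curl -k https://192.169.0.5:8443/healthz
	curl -k "https://192.169.0.5:8443/readyz?verbose"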
	
	
	==> kube-controller-manager [01c5e9b4fab0] <==
	I0927 00:54:06.445126       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0927 00:54:06.447687       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 00:54:06.473417       1 shared_informer.go:320] Caches are synced for daemon sets
	I0927 00:54:06.496437       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 00:54:06.921734       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 00:54:06.972377       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 00:54:06.972441       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0927 00:54:07.185942       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.202µs"
	I0927 00:54:09.276645       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.828631ms"
	I0927 00:54:09.276726       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.067µs"
	I0927 00:54:32.998333       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m04"
	I0927 00:54:32.998470       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m03"
	I0927 00:54:33.020582       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m03"
	I0927 00:54:33.020882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m04"
	I0927 00:54:33.070337       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.029804ms"
	I0927 00:54:33.070565       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="96.493µs"
	I0927 00:54:36.474604       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m03"
	I0927 00:54:38.190557       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m03"
	I0927 00:54:40.584626       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-h7qwt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-h7qwt\": the object has been modified; please apply your changes to the latest version and try again"
	I0927 00:54:40.585022       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"3537638a-d8ae-4b35-b930-21aeb412efa9", APIVersion:"v1", ResourceVersion:"270", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-h7qwt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-h7qwt": the object has been modified; please apply your changes to the latest version and try again
	I0927 00:54:40.589666       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="48.410037ms"
	I0927 00:54:40.614904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="25.040724ms"
	I0927 00:54:40.615187       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="45.324µs"
	I0927 00:54:46.573579       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m04"
	I0927 00:54:48.277366       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m04"
	
	
	==> kube-controller-manager [06a5f950d0b2] <==
	I0927 00:53:27.325939       1 serving.go:386] Generated self-signed cert in-memory
	I0927 00:53:28.243164       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0927 00:53:28.243279       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:53:28.245422       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0927 00:53:28.245777       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0927 00:53:28.245999       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0927 00:53:28.246030       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0927 00:53:48.339070       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-proxy [2d1ef1d1af27] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:42:39.294950       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 00:42:39.305827       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0927 00:42:39.314387       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:42:39.360026       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:42:39.360068       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:42:39.360085       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:42:39.362140       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:42:39.362382       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:42:39.362411       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:42:39.365397       1 config.go:199] "Starting service config controller"
	I0927 00:42:39.365470       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:42:39.365636       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:42:39.365692       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:42:39.366725       1 config.go:328] "Starting node config controller"
	I0927 00:42:39.366799       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:42:39.466084       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:42:39.466107       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:42:39.468057       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [3ab08f3aed77] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:54:02.572463       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 00:54:02.595215       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0927 00:54:02.595477       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:54:02.710300       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:54:02.710322       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:54:02.710339       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:54:02.714167       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:54:02.715628       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:54:02.715707       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:54:02.718471       1 config.go:199] "Starting service config controller"
	I0927 00:54:02.719333       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:54:02.719741       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:54:02.719810       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:54:02.721272       1 config.go:328] "Starting node config controller"
	I0927 00:54:02.721390       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:54:02.820358       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:54:02.820547       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:54:02.824323       1 shared_informer.go:320] Caches are synced for node config
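
The "Error cleaning up nftables rules ... Operation not supported" lines at the top of both kube-proxy instances appear to be the best-effort startup cleanup of leftover nftables tables; on this Buildroot kernel that nftables support is missing, the cleanup fails, and the proxy carries on in iptables mode ("Using iptables Proxier"), so the errors look cosmetic here. To confirm the mode actually in use, using the pod name from the node table above:

	# The proxier selection is logged once at startup
	kubectl -n kube-system logs kube-proxy-nrsx7 | grep -i proxier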
	
	
	==> kube-scheduler [8b01a83a0b09] <==
	E0927 00:45:52.380874       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mm66p\": pod kube-proxy-mm66p is already assigned to node \"ha-476000-m04\"" pod="kube-system/kube-proxy-mm66p"
	E0927 00:45:52.381463       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-44vxl\": pod kindnet-44vxl is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-44vxl" node="ha-476000-m04"
	E0927 00:45:52.381533       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 488a3806-d7c1-4397-bff8-00d9ea3cb984(kube-system/kindnet-44vxl) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-44vxl"
	E0927 00:45:52.381617       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-44vxl\": pod kindnet-44vxl is already assigned to node \"ha-476000-m04\"" pod="kube-system/kindnet-44vxl"
	I0927 00:45:52.381654       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-44vxl" node="ha-476000-m04"
	E0927 00:45:52.382881       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gtnxm\": pod kindnet-gtnxm is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-gtnxm" node="ha-476000-m04"
	E0927 00:45:52.383371       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c96b1801-d5cd-47bc-8555-43224fd5668c(kube-system/kindnet-gtnxm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-gtnxm"
	E0927 00:45:52.383419       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gtnxm\": pod kindnet-gtnxm is already assigned to node \"ha-476000-m04\"" pod="kube-system/kindnet-gtnxm"
	I0927 00:45:52.383438       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gtnxm" node="ha-476000-m04"
	E0927 00:45:52.385915       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5d8nb\": pod kube-proxy-5d8nb is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5d8nb" node="ha-476000-m04"
	E0927 00:45:52.386403       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1e9c0178-2d58-4ca1-9fbe-c8e54d91bf1a(kube-system/kube-proxy-5d8nb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5d8nb"
	E0927 00:45:52.388489       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5d8nb\": pod kube-proxy-5d8nb is already assigned to node \"ha-476000-m04\"" pod="kube-system/kube-proxy-5d8nb"
	I0927 00:45:52.388818       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5d8nb" node="ha-476000-m04"
	E0927 00:45:52.414440       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-p2r4t\": pod kindnet-p2r4t is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-p2r4t" node="ha-476000-m04"
	E0927 00:45:52.414491       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e7daae81-cf6d-498e-9458-8613a0c1f174(kube-system/kindnet-p2r4t) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-p2r4t"
	E0927 00:45:52.414504       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-p2r4t\": pod kindnet-p2r4t is already assigned to node \"ha-476000-m04\"" pod="kube-system/kindnet-p2r4t"
	I0927 00:45:52.414830       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-p2r4t" node="ha-476000-m04"
	E0927 00:45:52.434469       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-f2tbl\": pod kube-proxy-f2tbl is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-f2tbl" node="ha-476000-m04"
	E0927 00:45:52.434547       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ce1fa3d7-adbb-4d4d-bd23-a1e60ee54d5b(kube-system/kube-proxy-f2tbl) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-f2tbl"
	E0927 00:45:52.434998       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-f2tbl\": pod kube-proxy-f2tbl is already assigned to node \"ha-476000-m04\"" pod="kube-system/kube-proxy-f2tbl"
	I0927 00:45:52.435043       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-f2tbl" node="ha-476000-m04"
	I0927 00:47:41.631073       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0927 00:47:41.633242       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0927 00:47:41.634639       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0927 00:47:41.635978       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e923cc80604d] <==
	W0927 00:53:55.890712       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:55.890825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:55.916618       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:55.916669       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:56.112443       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:56.112541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:56.325586       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:56.325680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:56.333523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:56.333592       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:57.242866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:57.243040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:57.398430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:57.398522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:57.562966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:57.563196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:58.300576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:58.300855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:58.356734       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:58.356802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:54:02.892809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 00:54:02.892856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:54:02.893077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:54:02.893208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0927 00:54:02.956308       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 00:54:01 ha-476000 kubelet[1581]: I0927 00:54:01.236450    1581 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="617d5efb7a14c0369e33fba284407db0" path="/var/lib/kubelet/pods/617d5efb7a14c0369e33fba284407db0/volumes"
	Sep 27 00:54:01 ha-476000 kubelet[1581]: I0927 00:54:01.850956    1581 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef132416f65d445e2be52f1f35d402e4103f11df5abe57373ffacf06538460a2"
	Sep 27 00:54:01 ha-476000 kubelet[1581]: I0927 00:54:01.898449    1581 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82fb727d3b4ab9beb6771fe42b02b13cfa819ec6e94565fc85eb5e44849131dc"
	Sep 27 00:54:01 ha-476000 kubelet[1581]: I0927 00:54:01.919692    1581 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c557d11d235a0ab874d2738bef5a997f95275377aa0e92ea879bcb3ddbec2481"
	Sep 27 00:54:02 ha-476000 kubelet[1581]: I0927 00:54:02.046801    1581 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ded079a0572139d8da280864d2cf23e26a7a74761427fdb6aa8247bf1b618b63"
	Sep 27 00:54:19 ha-476000 kubelet[1581]: I0927 00:54:19.211634    1581 scope.go:117] "RemoveContainer" containerID="3e1d19d36ca870b70f194e613fddfe9196146ec03c8bbb41afad1f4d75ce6405"
	Sep 27 00:54:19 ha-476000 kubelet[1581]: E0927 00:54:19.255670    1581 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 00:54:19 ha-476000 kubelet[1581]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:54:19 ha-476000 kubelet[1581]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:54:19 ha-476000 kubelet[1581]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:54:19 ha-476000 kubelet[1581]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 00:54:32 ha-476000 kubelet[1581]: I0927 00:54:32.420831    1581 scope.go:117] "RemoveContainer" containerID="4e07ad9ca26cc4761a54659f0b247156a2737aea8eb7e117dc886da3b1912592"
	Sep 27 00:54:32 ha-476000 kubelet[1581]: I0927 00:54:32.421022    1581 scope.go:117] "RemoveContainer" containerID="182d3576c4be84b985fdcac8be52d4bc1daa03061cca1c9af1ea1719cc87ef93"
	Sep 27 00:54:32 ha-476000 kubelet[1581]: E0927 00:54:32.421101    1581 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e3e367a7-6cda-4177-a81d-7897333308d7)\"" pod="kube-system/storage-provisioner" podUID="e3e367a7-6cda-4177-a81d-7897333308d7"
	Sep 27 00:54:47 ha-476000 kubelet[1581]: I0927 00:54:47.232370    1581 scope.go:117] "RemoveContainer" containerID="182d3576c4be84b985fdcac8be52d4bc1daa03061cca1c9af1ea1719cc87ef93"
	Sep 27 00:55:19 ha-476000 kubelet[1581]: E0927 00:55:19.247407    1581 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 00:55:19 ha-476000 kubelet[1581]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:55:19 ha-476000 kubelet[1581]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:55:19 ha-476000 kubelet[1581]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:55:19 ha-476000 kubelet[1581]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 00:56:19 ha-476000 kubelet[1581]: E0927 00:56:19.247959    1581 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 00:56:19 ha-476000 kubelet[1581]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:56:19 ha-476000 kubelet[1581]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:56:19 ha-476000 kubelet[1581]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:56:19 ha-476000 kubelet[1581]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-476000 -n ha-476000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-476000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (4.34s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (302.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-476000 --control-plane -v=7 --alsologtostderr
E0926 17:58:14.418257    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:00:36.849044    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p ha-476000 --control-plane -v=7 --alsologtostderr: exit status 80 (4m58.23193826s)

                                                
                                                
-- stdout --
	* Adding node m05 to cluster ha-476000 as [worker control-plane]
	* Starting "ha-476000-m05" control-plane node in "ha-476000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 17:56:44.092135    4295 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:56:44.092436    4295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:56:44.092441    4295 out.go:358] Setting ErrFile to fd 2...
	I0926 17:56:44.092445    4295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:56:44.092623    4295 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 17:56:44.093006    4295 mustload.go:65] Loading cluster: ha-476000
	I0926 17:56:44.093357    4295 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:56:44.093727    4295 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:56:44.093773    4295 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:56:44.102204    4295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52202
	I0926 17:56:44.102623    4295 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:56:44.103041    4295 main.go:141] libmachine: Using API Version  1
	I0926 17:56:44.103082    4295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:56:44.103325    4295 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:56:44.103455    4295 main.go:141] libmachine: (ha-476000) Calling .GetState
	I0926 17:56:44.103552    4295 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:56:44.103622    4295 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4191
	I0926 17:56:44.104656    4295 host.go:66] Checking if "ha-476000" exists ...
	I0926 17:56:44.104907    4295 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:56:44.104931    4295 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:56:44.113771    4295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52204
	I0926 17:56:44.114127    4295 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:56:44.114507    4295 main.go:141] libmachine: Using API Version  1
	I0926 17:56:44.114531    4295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:56:44.114742    4295 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:56:44.114850    4295 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:56:44.115208    4295 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:56:44.115245    4295 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:56:44.123719    4295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52206
	I0926 17:56:44.124038    4295 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:56:44.124383    4295 main.go:141] libmachine: Using API Version  1
	I0926 17:56:44.124400    4295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:56:44.124640    4295 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:56:44.124748    4295 main.go:141] libmachine: (ha-476000-m02) Calling .GetState
	I0926 17:56:44.124833    4295 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:56:44.124905    4295 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid from json: 4198
	I0926 17:56:44.125909    4295 host.go:66] Checking if "ha-476000-m02" exists ...
	I0926 17:56:44.126184    4295 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:56:44.126209    4295 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:56:44.134948    4295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52208
	I0926 17:56:44.135286    4295 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:56:44.135630    4295 main.go:141] libmachine: Using API Version  1
	I0926 17:56:44.135643    4295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:56:44.135851    4295 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:56:44.135957    4295 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:56:44.136302    4295 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:56:44.136332    4295 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:56:44.144766    4295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52210
	I0926 17:56:44.145102    4295 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:56:44.145412    4295 main.go:141] libmachine: Using API Version  1
	I0926 17:56:44.145422    4295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:56:44.145649    4295 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:56:44.145765    4295 main.go:141] libmachine: (ha-476000-m03) Calling .GetState
	I0926 17:56:44.145857    4295 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:56:44.145933    4295 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid from json: 4226
	I0926 17:56:44.146921    4295 host.go:66] Checking if "ha-476000-m03" exists ...
	I0926 17:56:44.147208    4295 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:56:44.147233    4295 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:56:44.155766    4295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52212
	I0926 17:56:44.156119    4295 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:56:44.156434    4295 main.go:141] libmachine: Using API Version  1
	I0926 17:56:44.156459    4295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:56:44.156682    4295 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:56:44.156795    4295 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:56:44.156895    4295 api_server.go:166] Checking apiserver status ...
	I0926 17:56:44.156964    4295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 17:56:44.156982    4295 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:56:44.157090    4295 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:56:44.157172    4295 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:56:44.157259    4295 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:56:44.157345    4295 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:56:44.199782    4295 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2309/cgroup
	W0926 17:56:44.208962    4295 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2309/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0926 17:56:44.209025    4295 ssh_runner.go:195] Run: ls
	I0926 17:56:44.212187    4295 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0926 17:56:44.215539    4295 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0926 17:56:44.237058    4295 out.go:177] * Adding node m05 to cluster ha-476000 as [worker control-plane]
	I0926 17:56:44.257791    4295 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:56:44.257892    4295 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:56:44.279867    4295 out.go:177] * Starting "ha-476000-m05" control-plane node in "ha-476000" cluster
	I0926 17:56:44.300721    4295 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:56:44.300772    4295 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0926 17:56:44.300788    4295 cache.go:56] Caching tarball of preloaded images
	I0926 17:56:44.300922    4295 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 17:56:44.300934    4295 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:56:44.301013    4295 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:56:44.301556    4295 start.go:360] acquireMachinesLock for ha-476000-m05: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:56:44.301617    4295 start.go:364] duration metric: took 44.79µs to acquireMachinesLock for "ha-476000-m05"
	I0926 17:56:44.301636    4295 start.go:93] Provisioning new machine with config: &{Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m05 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m05 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:true Worker:true}
	I0926 17:56:44.301726    4295 start.go:125] createHost starting for "m05" (driver="hyperkit")
	I0926 17:56:44.322622    4295 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0926 17:56:44.322778    4295 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:56:44.322806    4295 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:56:44.331165    4295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52216
	I0926 17:56:44.331524    4295 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:56:44.331840    4295 main.go:141] libmachine: Using API Version  1
	I0926 17:56:44.331851    4295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:56:44.332059    4295 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:56:44.332165    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetMachineName
	I0926 17:56:44.332253    4295 main.go:141] libmachine: (ha-476000-m05) Calling .DriverName
	I0926 17:56:44.332345    4295 start.go:159] libmachine.API.Create for "ha-476000" (driver="hyperkit")
	I0926 17:56:44.332371    4295 client.go:168] LocalClient.Create starting
	I0926 17:56:44.332401    4295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem
	I0926 17:56:44.332453    4295 main.go:141] libmachine: Decoding PEM data...
	I0926 17:56:44.332471    4295 main.go:141] libmachine: Parsing certificate...
	I0926 17:56:44.332528    4295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem
	I0926 17:56:44.332571    4295 main.go:141] libmachine: Decoding PEM data...
	I0926 17:56:44.332582    4295 main.go:141] libmachine: Parsing certificate...
	I0926 17:56:44.332596    4295 main.go:141] libmachine: Running pre-create checks...
	I0926 17:56:44.332601    4295 main.go:141] libmachine: (ha-476000-m05) Calling .PreCreateCheck
	I0926 17:56:44.332679    4295 main.go:141] libmachine: (ha-476000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:56:44.332728    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetConfigRaw
	I0926 17:56:44.333293    4295 main.go:141] libmachine: Creating machine...
	I0926 17:56:44.333301    4295 main.go:141] libmachine: (ha-476000-m05) Calling .Create
	I0926 17:56:44.333368    4295 main.go:141] libmachine: (ha-476000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:56:44.333492    4295 main.go:141] libmachine: (ha-476000-m05) DBG | I0926 17:56:44.333365    4303 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 17:56:44.333548    4295 main.go:141] libmachine: (ha-476000-m05) Downloading /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1128/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0926 17:56:44.570055    4295 main.go:141] libmachine: (ha-476000-m05) DBG | I0926 17:56:44.569995    4303 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/id_rsa...
	I0926 17:56:44.698299    4295 main.go:141] libmachine: (ha-476000-m05) DBG | I0926 17:56:44.698236    4303 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/ha-476000-m05.rawdisk...
	I0926 17:56:44.698328    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Writing magic tar header
	I0926 17:56:44.698345    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Writing SSH key tar header
	I0926 17:56:44.698983    4295 main.go:141] libmachine: (ha-476000-m05) DBG | I0926 17:56:44.698951    4303 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05 ...
	I0926 17:56:45.118553    4295 main.go:141] libmachine: (ha-476000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:56:45.118567    4295 main.go:141] libmachine: (ha-476000-m05) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/hyperkit.pid
	I0926 17:56:45.118606    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Using UUID 1a573666-ef35-4d90-9ae1-c871c4ae6371
	I0926 17:56:45.144353    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Generated MAC a6:b7:dc:39:a:39
	I0926 17:56:45.144371    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000
	I0926 17:56:45.144410    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1a573666-ef35-4d90-9ae1-c871c4ae6371", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:56:45.144438    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1a573666-ef35-4d90-9ae1-c871c4ae6371", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:56:45.144512    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1a573666-ef35-4d90-9ae1-c871c4ae6371", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/ha-476000-m05.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"}
	I0926 17:56:45.144555    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1a573666-ef35-4d90-9ae1-c871c4ae6371 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/ha-476000-m05.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"
	I0926 17:56:45.144577    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 17:56:45.147467    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 DEBUG: hyperkit: Pid is 4305
	I0926 17:56:45.147890    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Attempt 0
	I0926 17:56:45.147908    4295 main.go:141] libmachine: (ha-476000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:56:45.148005    4295 main.go:141] libmachine: (ha-476000-m05) DBG | hyperkit pid from json: 4305
	I0926 17:56:45.148922    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Searching for a6:b7:dc:39:a:39 in /var/db/dhcpd_leases ...
	I0926 17:56:45.149005    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:56:45.149024    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 17:56:45.149046    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 17:56:45.149077    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 17:56:45.149087    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 17:56:45.149097    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 17:56:45.149109    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 17:56:45.149118    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 17:56:45.155524    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 17:56:45.164188    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 17:56:45.165012    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:56:45.165031    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:56:45.165040    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:56:45.165048    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:56:45.554073    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 17:56:45.554089    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 17:56:45.668741    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:56:45.668761    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:56:45.668771    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:56:45.668779    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:56:45.669635    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 17:56:45.669644    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 17:56:47.149244    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Attempt 1
	I0926 17:56:47.149264    4295 main.go:141] libmachine: (ha-476000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:56:47.149341    4295 main.go:141] libmachine: (ha-476000-m05) DBG | hyperkit pid from json: 4305
	I0926 17:56:47.150184    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Searching for a6:b7:dc:39:a:39 in /var/db/dhcpd_leases ...
	I0926 17:56:47.150240    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:56:47.150251    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 17:56:47.150263    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 17:56:47.150274    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 17:56:47.150281    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 17:56:47.150288    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 17:56:47.150307    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 17:56:47.150330    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 17:56:49.151267    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Attempt 2
	I0926 17:56:49.151285    4295 main.go:141] libmachine: (ha-476000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:56:49.151361    4295 main.go:141] libmachine: (ha-476000-m05) DBG | hyperkit pid from json: 4305
	I0926 17:56:49.152200    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Searching for a6:b7:dc:39:a:39 in /var/db/dhcpd_leases ...
	I0926 17:56:49.152244    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:56:49.152259    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 17:56:49.152280    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 17:56:49.152301    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 17:56:49.152311    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 17:56:49.152323    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 17:56:49.152330    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 17:56:49.152336    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 17:56:51.153341    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Attempt 3
	I0926 17:56:51.153372    4295 main.go:141] libmachine: (ha-476000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:56:51.153441    4295 main.go:141] libmachine: (ha-476000-m05) DBG | hyperkit pid from json: 4305
	I0926 17:56:51.154239    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Searching for a6:b7:dc:39:a:39 in /var/db/dhcpd_leases ...
	I0926 17:56:51.154276    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:56:51.154286    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 17:56:51.154293    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 17:56:51.154302    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 17:56:51.154309    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 17:56:51.154316    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 17:56:51.154322    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 17:56:51.154334    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 17:56:51.347969    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:51 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0926 17:56:51.348026    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:51 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0926 17:56:51.348036    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:51 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0926 17:56:51.370392    4295 main.go:141] libmachine: (ha-476000-m05) DBG | 2024/09/26 17:56:51 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0926 17:56:53.155891    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Attempt 4
	I0926 17:56:53.155907    4295 main.go:141] libmachine: (ha-476000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:56:53.155985    4295 main.go:141] libmachine: (ha-476000-m05) DBG | hyperkit pid from json: 4305
	I0926 17:56:53.156782    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Searching for a6:b7:dc:39:a:39 in /var/db/dhcpd_leases ...
	I0926 17:56:53.156824    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:56:53.156836    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f753e9}
	I0926 17:56:53.156862    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 17:56:53.156870    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 17:56:53.156877    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 17:56:53.156886    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:4a:d:e6:4e:86:e6 ID:1,4a:d:e6:4e:86:e6 Lease:0x66f74f85}
	I0926 17:56:53.156897    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:be:55:44:c4:6b:bc ID:1,be:55:44:c4:6b:bc Lease:0x66f74de7}
	I0926 17:56:53.156904    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:7e:35:69:36:a6 ID:1,8a:7e:35:69:36:a6 Lease:0x66f74a6f}
	I0926 17:56:55.158233    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Attempt 5
	I0926 17:56:55.158254    4295 main.go:141] libmachine: (ha-476000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:56:55.158435    4295 main.go:141] libmachine: (ha-476000-m05) DBG | hyperkit pid from json: 4305
	I0926 17:56:55.159507    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Searching for a6:b7:dc:39:a:39 in /var/db/dhcpd_leases ...
	I0926 17:56:55.159599    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I0926 17:56:55.159617    4295 main.go:141] libmachine: (ha-476000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:a6:b7:dc:39:a:39 ID:1,a6:b7:dc:39:a:39 Lease:0x66f75456}
	I0926 17:56:55.159643    4295 main.go:141] libmachine: (ha-476000-m05) DBG | Found match: a6:b7:dc:39:a:39
	I0926 17:56:55.159654    4295 main.go:141] libmachine: (ha-476000-m05) DBG | IP: 192.169.0.9
	I0926 17:56:55.159706    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetConfigRaw
	I0926 17:56:55.160438    4295 main.go:141] libmachine: (ha-476000-m05) Calling .DriverName
	I0926 17:56:55.160555    4295 main.go:141] libmachine: (ha-476000-m05) Calling .DriverName
	I0926 17:56:55.160683    4295 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0926 17:56:55.160693    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetState
	I0926 17:56:55.160817    4295 main.go:141] libmachine: (ha-476000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:56:55.160834    4295 main.go:141] libmachine: (ha-476000-m05) DBG | hyperkit pid from json: 4305
	I0926 17:56:55.161645    4295 main.go:141] libmachine: Detecting operating system of created instance...
	I0926 17:56:55.161659    4295 main.go:141] libmachine: Waiting for SSH to be available...
	I0926 17:56:55.161665    4295 main.go:141] libmachine: Getting to WaitForSSH function...
	I0926 17:56:55.161670    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHHostname
	I0926 17:56:55.161773    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHPort
	I0926 17:56:55.161862    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:55.161971    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:55.162061    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHUsername
	I0926 17:56:55.162222    4295 main.go:141] libmachine: Using SSH client type: native
	I0926 17:56:55.162439    4295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe84ed00] 0xe8519e0 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I0926 17:56:55.162447    4295 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0926 17:56:56.213723    4295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
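
	The "exit 0" probe above is how libmachine decides SSH is up: it keeps running a no-op command over SSH until one returns a zero exit status. A rough equivalent using the system ssh client is sketched below; the user, host, and key-path parameters are placeholders, not values from this run.

	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// waitForSSH runs "exit 0" over ssh until it succeeds or attempts run out.
	func waitForSSH(user, host, keyPath string, attempts int) error {
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("ssh",
				"-i", keyPath,
				"-o", "StrictHostKeyChecking=no",
				"-o", "ConnectTimeout=5",
				user+"@"+host, "exit 0")
			if cmd.Run() == nil {
				return nil // zero exit status: the SSH daemon is ready
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh to %s@%s never became available", user, host)
	}
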
	I0926 17:56:56.213737    4295 main.go:141] libmachine: Detecting the provisioner...
	I0926 17:56:56.213743    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHHostname
	I0926 17:56:56.213875    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHPort
	I0926 17:56:56.213983    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:56.214096    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:56.214183    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHUsername
	I0926 17:56:56.214330    4295 main.go:141] libmachine: Using SSH client type: native
	I0926 17:56:56.214479    4295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe84ed00] 0xe8519e0 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I0926 17:56:56.214486    4295 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0926 17:56:56.262479    4295 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0926 17:56:56.262541    4295 main.go:141] libmachine: found compatible host: buildroot
	I0926 17:56:56.262548    4295 main.go:141] libmachine: Provisioning with buildroot...
	I0926 17:56:56.262553    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetMachineName
	I0926 17:56:56.262687    4295 buildroot.go:166] provisioning hostname "ha-476000-m05"
	I0926 17:56:56.262698    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetMachineName
	I0926 17:56:56.262789    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHHostname
	I0926 17:56:56.262879    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHPort
	I0926 17:56:56.262969    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:56.263044    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:56.263118    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHUsername
	I0926 17:56:56.263260    4295 main.go:141] libmachine: Using SSH client type: native
	I0926 17:56:56.263387    4295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe84ed00] 0xe8519e0 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I0926 17:56:56.263395    4295 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-476000-m05 && echo "ha-476000-m05" | sudo tee /etc/hostname
	I0926 17:56:56.322246    4295 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-476000-m05
	
	I0926 17:56:56.322269    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHHostname
	I0926 17:56:56.322417    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHPort
	I0926 17:56:56.322540    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:56.322637    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:56.322739    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHUsername
	I0926 17:56:56.322898    4295 main.go:141] libmachine: Using SSH client type: native
	I0926 17:56:56.323055    4295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe84ed00] 0xe8519e0 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I0926 17:56:56.323066    4295 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-476000-m05' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-476000-m05/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-476000-m05' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:56:56.382791    4295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 17:56:56.382813    4295 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 17:56:56.382825    4295 buildroot.go:174] setting up certificates
	I0926 17:56:56.382832    4295 provision.go:84] configureAuth start
	I0926 17:56:56.382839    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetMachineName
	I0926 17:56:56.382965    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetIP
	I0926 17:56:56.383051    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHHostname
	I0926 17:56:56.383135    4295 provision.go:143] copyHostCerts
	I0926 17:56:56.383168    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:56:56.383223    4295 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 17:56:56.383230    4295 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:56:56.383383    4295 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 17:56:56.383588    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:56:56.383618    4295 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 17:56:56.383623    4295 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:56:56.383703    4295 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 17:56:56.383855    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:56:56.383896    4295 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 17:56:56.383901    4295 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:56:56.383988    4295 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 17:56:56.384173    4295 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.ha-476000-m05 san=[127.0.0.1 192.169.0.9 ha-476000-m05 localhost minikube]
	I0926 17:56:56.548494    4295 provision.go:177] copyRemoteCerts
	I0926 17:56:56.548556    4295 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 17:56:56.548572    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHHostname
	I0926 17:56:56.548711    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHPort
	I0926 17:56:56.548805    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:56.548892    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHUsername
	I0926 17:56:56.548983    4295 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/id_rsa Username:docker}
	I0926 17:56:56.579965    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 17:56:56.580039    4295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 17:56:56.599697    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 17:56:56.599770    4295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 17:56:56.619893    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 17:56:56.619962    4295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0926 17:56:56.639729    4295 provision.go:87] duration metric: took 256.884264ms to configureAuth
	I0926 17:56:56.639744    4295 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:56:56.639918    4295 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:56:56.639931    4295 main.go:141] libmachine: (ha-476000-m05) Calling .DriverName
	I0926 17:56:56.640065    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHHostname
	I0926 17:56:56.640162    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHPort
	I0926 17:56:56.640249    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:56.640329    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:56.640409    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHUsername
	I0926 17:56:56.640545    4295 main.go:141] libmachine: Using SSH client type: native
	I0926 17:56:56.640673    4295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe84ed00] 0xe8519e0 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I0926 17:56:56.640681    4295 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:56:56.689180    4295 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:56:56.689197    4295 buildroot.go:70] root file system type: tmpfs
	I0926 17:56:56.689270    4295 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:56:56.689284    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHHostname
	I0926 17:56:56.689457    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHPort
	I0926 17:56:56.689549    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:56.689653    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:56.689727    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHUsername
	I0926 17:56:56.689852    4295 main.go:141] libmachine: Using SSH client type: native
	I0926 17:56:56.689996    4295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe84ed00] 0xe8519e0 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I0926 17:56:56.690040    4295 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:56:56.749803    4295 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 17:56:56.749825    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHHostname
	I0926 17:56:56.749955    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHPort
	I0926 17:56:56.750051    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:56.750134    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:56.750233    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHUsername
	I0926 17:56:56.750369    4295 main.go:141] libmachine: Using SSH client type: native
	I0926 17:56:56.750504    4295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe84ed00] 0xe8519e0 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I0926 17:56:56.750516    4295 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:56:58.275201    4295 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
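	The one-liner above (diff, swap, daemon-reload, enable, restart) makes the unit install idempotent: if the rendered docker.service matches what is already on disk, diff exits zero and nothing restarts. The same idea in Go, as a sketch only: updateUnit is a made-up name, it assumes root and a systemd host, and it mirrors rather than reproduces minikube's provisioner.

	package main
	
	import (
		"bytes"
		"os"
		"os/exec"
	)
	
	// updateUnit replaces a systemd unit only when its content actually changed,
	// then reloads systemd and restarts the service -- the shell
	// "diff || { mv; daemon-reload; enable; restart; }" idiom from the log.
	func updateUnit(path, service string, content []byte) error {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, content) {
			return nil // unchanged: skip the disruptive restart
		}
		if err := os.WriteFile(path+".new", content, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"daemon-reload"},
			{"enable", service},
			{"restart", service},
		} {
			if err := exec.Command("systemctl", args...).Run(); err != nil {
				return err
			}
		}
		return nil
	}
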
	I0926 17:56:58.275219    4295 main.go:141] libmachine: Checking connection to Docker...
	I0926 17:56:58.275225    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetURL
	I0926 17:56:58.275370    4295 main.go:141] libmachine: Docker is up and running!
	I0926 17:56:58.275378    4295 main.go:141] libmachine: Reticulating splines...
	I0926 17:56:58.275383    4295 client.go:171] duration metric: took 13.94295403s to LocalClient.Create
	I0926 17:56:58.275417    4295 start.go:167] duration metric: took 13.943020197s to libmachine.API.Create "ha-476000"
	I0926 17:56:58.275430    4295 start.go:293] postStartSetup for "ha-476000-m05" (driver="hyperkit")
	I0926 17:56:58.275443    4295 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:56:58.275455    4295 main.go:141] libmachine: (ha-476000-m05) Calling .DriverName
	I0926 17:56:58.275608    4295 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:56:58.275624    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHHostname
	I0926 17:56:58.275709    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHPort
	I0926 17:56:58.275824    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:58.275933    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHUsername
	I0926 17:56:58.276021    4295 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/id_rsa Username:docker}
	I0926 17:56:58.311589    4295 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:56:58.319272    4295 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 17:56:58.319291    4295 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 17:56:58.319417    4295 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 17:56:58.319611    4295 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 17:56:58.319618    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 17:56:58.319865    4295 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 17:56:58.329048    4295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:56:58.360696    4295 start.go:296] duration metric: took 85.250068ms for postStartSetup
	I0926 17:56:58.360725    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetConfigRaw
	I0926 17:56:58.361344    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetIP
	I0926 17:56:58.361519    4295 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:56:58.361901    4295 start.go:128] duration metric: took 14.060111309s to createHost
	I0926 17:56:58.361926    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHHostname
	I0926 17:56:58.362035    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHPort
	I0926 17:56:58.362123    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:58.362214    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:58.362298    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHUsername
	I0926 17:56:58.362426    4295 main.go:141] libmachine: Using SSH client type: native
	I0926 17:56:58.362556    4295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe84ed00] 0xe8519e0 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I0926 17:56:58.362563    4295 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:56:58.411460    4295 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398618.533830448
	
	I0926 17:56:58.411474    4295 fix.go:216] guest clock: 1727398618.533830448
	I0926 17:56:58.411481    4295 fix.go:229] Guest: 2024-09-26 17:56:58.533830448 -0700 PDT Remote: 2024-09-26 17:56:58.361909 -0700 PDT m=+14.305898801 (delta=171.921448ms)
	I0926 17:56:58.411504    4295 fix.go:200] guest clock delta is within tolerance: 171.921448ms
	I0926 17:56:58.411508    4295 start.go:83] releasing machines lock for "ha-476000-m05", held for 14.109832615s
	I0926 17:56:58.411526    4295 main.go:141] libmachine: (ha-476000-m05) Calling .DriverName
	I0926 17:56:58.411653    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetIP
	I0926 17:56:58.411739    4295 main.go:141] libmachine: (ha-476000-m05) Calling .DriverName
	I0926 17:56:58.412041    4295 main.go:141] libmachine: (ha-476000-m05) Calling .DriverName
	I0926 17:56:58.412133    4295 main.go:141] libmachine: (ha-476000-m05) Calling .DriverName
	I0926 17:56:58.412229    4295 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:56:58.412264    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHHostname
	I0926 17:56:58.412317    4295 ssh_runner.go:195] Run: systemctl --version
	I0926 17:56:58.412328    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHHostname
	I0926 17:56:58.412358    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHPort
	I0926 17:56:58.412413    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHPort
	I0926 17:56:58.412466    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:58.412528    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHKeyPath
	I0926 17:56:58.412583    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHUsername
	I0926 17:56:58.412633    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetSSHUsername
	I0926 17:56:58.412689    4295 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/id_rsa Username:docker}
	I0926 17:56:58.412729    4295 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m05/id_rsa Username:docker}
	I0926 17:56:58.440271    4295 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 17:56:58.485396    4295 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:56:58.485486    4295 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 17:56:58.498657    4295 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 17:56:58.498673    4295 start.go:495] detecting cgroup driver to use...
	I0926 17:56:58.498787    4295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:56:58.513898    4295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 17:56:58.522421    4295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:56:58.531203    4295 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:56:58.531277    4295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:56:58.540269    4295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:56:58.548743    4295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:56:58.557226    4295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:56:58.565721    4295 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:56:58.574451    4295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:56:58.583022    4295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:56:58.591686    4295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 17:56:58.600348    4295 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:56:58.607951    4295 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 17:56:58.608004    4295 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 17:56:58.616642    4295 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
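
	The sequence just above is a common guest-setup fallback: the sysctl probe fails with status 255 because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is switched on directly through /proc. Sketched in Go below (run as root inside the guest; the paths are the standard kernel interfaces, and the helper name is illustrative):

	package main
	
	import (
		"os"
		"os/exec"
	)
	
	// ensureBridgeNetfilter loads br_netfilter if the bridge sysctl is missing,
	// then enables IPv4 forwarding, following the fallback visible in the log.
	func ensureBridgeNetfilter() error {
		const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(key); err != nil {
			// Module not loaded yet: modprobe creates the sysctl entry.
			if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
				return err
			}
		}
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
	}
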
	I0926 17:56:58.624939    4295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:56:58.726867    4295 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 17:56:58.746098    4295 start.go:495] detecting cgroup driver to use...
	I0926 17:56:58.746203    4295 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:56:58.765362    4295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:56:58.776020    4295 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:56:58.788597    4295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:56:58.799498    4295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:56:58.809893    4295 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 17:56:58.833692    4295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:56:58.844951    4295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:56:58.859869    4295 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:56:58.862676    4295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:56:58.870713    4295 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 17:56:58.883975    4295 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:56:58.984550    4295 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:56:59.080555    4295 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:56:59.080627    4295 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 17:56:59.097498    4295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:56:59.195879    4295 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:57:01.482009    4295 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.286103149s)
	I0926 17:57:01.482080    4295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 17:57:01.493293    4295 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0926 17:57:01.507819    4295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:57:01.519335    4295 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 17:57:01.621273    4295 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 17:57:01.746125    4295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:57:01.858403    4295 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 17:57:01.871610    4295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:57:01.882549    4295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:57:01.982706    4295 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 17:57:02.041144    4295 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 17:57:02.041236    4295 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0926 17:57:02.046193    4295 start.go:563] Will wait 60s for crictl version
	I0926 17:57:02.046260    4295 ssh_runner.go:195] Run: which crictl
	I0926 17:57:02.049492    4295 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 17:57:02.080342    4295 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0926 17:57:02.080428    4295 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:57:02.097100    4295 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:57:02.138107    4295 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0926 17:57:02.138183    4295 main.go:141] libmachine: (ha-476000-m05) Calling .GetIP
	I0926 17:57:02.138592    4295 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0926 17:57:02.143417    4295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
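
	Note that the hosts update above avoids sed -i: grep -v filters out any stale host.minikube.internal line, the fresh mapping is appended, and cp writes the temp file back over /etc/hosts. Copying in place (rather than renaming the temp file) keeps this working even when /etc/hosts is a bind mount. A Go rendering of the same rewrite, with the tab-anchored grep approximated by a suffix match (setHostAlias is a made-up helper name):

	package main
	
	import (
		"os"
		"strings"
	)
	
	// setHostAlias rewrites hostsPath so exactly one line maps ip to name,
	// mirroring the "{ grep -v; echo; } > /tmp/h.$$; sudo cp" idiom in the log.
	func setHostAlias(hostsPath, ip, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop any stale mapping for this name
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		// Write the contents back in place rather than renaming a temp file
		// over the path, so this still works when /etc/hosts is a bind mount.
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}
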
	I0926 17:57:02.153716    4295 mustload.go:65] Loading cluster: ha-476000
	I0926 17:57:02.153907    4295 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:57:02.154142    4295 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:57:02.154166    4295 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:57:02.162701    4295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52239
	I0926 17:57:02.163046    4295 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:57:02.163400    4295 main.go:141] libmachine: Using API Version  1
	I0926 17:57:02.163415    4295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:57:02.163635    4295 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:57:02.163737    4295 main.go:141] libmachine: (ha-476000) Calling .GetState
	I0926 17:57:02.163817    4295 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:57:02.163882    4295 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4191
	I0926 17:57:02.164854    4295 host.go:66] Checking if "ha-476000" exists ...
	I0926 17:57:02.165118    4295 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:57:02.165143    4295 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:57:02.173462    4295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52241
	I0926 17:57:02.173784    4295 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:57:02.174138    4295 main.go:141] libmachine: Using API Version  1
	I0926 17:57:02.174153    4295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:57:02.174349    4295 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:57:02.174461    4295 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:57:02.174563    4295 certs.go:68] Setting up /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000 for IP: 192.169.0.9
	I0926 17:57:02.174571    4295 certs.go:194] generating shared ca certs ...
	I0926 17:57:02.174585    4295 certs.go:226] acquiring lock for ca certs: {Name:mk3d4665cb5756ae49ea6f7cd2a58d273aecc507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:57:02.174750    4295 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key
	I0926 17:57:02.174819    4295 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key
	I0926 17:57:02.174831    4295 certs.go:256] generating profile certs ...
	I0926 17:57:02.174932    4295 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key
	I0926 17:57:02.174952    4295 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.d08ccda8
	I0926 17:57:02.174968    4295 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.d08ccda8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.9 192.169.0.254]
	I0926 17:57:02.218844    4295 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.d08ccda8 ...
	I0926 17:57:02.218863    4295 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.d08ccda8: {Name:mkb06d9874819fc3786316dda5d457fc4befcb2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:57:02.219259    4295 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.d08ccda8 ...
	I0926 17:57:02.219269    4295 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.d08ccda8: {Name:mkf088abc04d25a4b7bb88526ffc81aceacbbb4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:57:02.219519    4295 certs.go:381] copying /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.d08ccda8 -> /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt
	I0926 17:57:02.219766    4295 certs.go:385] copying /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.d08ccda8 -> /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key
	I0926 17:57:02.220032    4295 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key
	I0926 17:57:02.220043    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0926 17:57:02.220068    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0926 17:57:02.220090    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0926 17:57:02.220111    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0926 17:57:02.220131    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0926 17:57:02.220153    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0926 17:57:02.220174    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0926 17:57:02.220193    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0926 17:57:02.220298    4295 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem (1338 bytes)
	W0926 17:57:02.220351    4295 certs.go:480] ignoring /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679_empty.pem, impossibly tiny 0 bytes
	I0926 17:57:02.220360    4295 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 17:57:02.220405    4295 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem (1082 bytes)
	I0926 17:57:02.220442    4295 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem (1123 bytes)
	I0926 17:57:02.220494    4295 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem (1675 bytes)
	I0926 17:57:02.220569    4295 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:57:02.220609    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem -> /usr/share/ca-certificates/1679.pem
	I0926 17:57:02.220632    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0926 17:57:02.220653    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:57:02.220692    4295 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:57:02.220842    4295 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:57:02.220949    4295 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:57:02.221047    4295 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:57:02.221143    4295 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:57:02.252824    4295 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0926 17:57:02.256445    4295 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0926 17:57:02.265429    4295 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0926 17:57:02.268607    4295 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0926 17:57:02.279909    4295 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0926 17:57:02.283106    4295 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0926 17:57:02.292670    4295 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0926 17:57:02.295843    4295 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0926 17:57:02.305037    4295 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0926 17:57:02.308502    4295 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0926 17:57:02.317327    4295 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0926 17:57:02.320898    4295 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0926 17:57:02.330001    4295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 17:57:02.351244    4295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 17:57:02.371950    4295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 17:57:02.392090    4295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0926 17:57:02.413830    4295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1452 bytes)
	I0926 17:57:02.434733    4295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0926 17:57:02.455875    4295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 17:57:02.475941    4295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 17:57:02.496983    4295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem --> /usr/share/ca-certificates/1679.pem (1338 bytes)
	I0926 17:57:02.517454    4295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1708 bytes)
	I0926 17:57:02.537691    4295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 17:57:02.558648    4295 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0926 17:57:02.573097    4295 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0926 17:57:02.586817    4295 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0926 17:57:02.601004    4295 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0926 17:57:02.614931    4295 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0926 17:57:02.628857    4295 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0926 17:57:02.642394    4295 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0926 17:57:02.656456    4295 ssh_runner.go:195] Run: openssl version
	I0926 17:57:02.660804    4295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0926 17:57:02.669264    4295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0926 17:57:02.672850    4295 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:36 /usr/share/ca-certificates/16792.pem
	I0926 17:57:02.672914    4295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0926 17:57:02.677274    4295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 17:57:02.685690    4295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 17:57:02.694128    4295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:57:02.698463    4295 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:57:02.698535    4295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:57:02.702905    4295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 17:57:02.711276    4295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1679.pem && ln -fs /usr/share/ca-certificates/1679.pem /etc/ssl/certs/1679.pem"
	I0926 17:57:02.720636    4295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1679.pem
	I0926 17:57:02.724279    4295 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:36 /usr/share/ca-certificates/1679.pem
	I0926 17:57:02.724345    4295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1679.pem
	I0926 17:57:02.729146    4295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1679.pem /etc/ssl/certs/51391683.0"
	I0926 17:57:02.739160    4295 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 17:57:02.742309    4295 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 17:57:02.742358    4295 kubeadm.go:934] updating node {m05 192.169.0.9 8443 v1.31.1  true true} ...
	I0926 17:57:02.742448    4295 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-476000-m05 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 17:57:02.742471    4295 kube-vip.go:115] generating kube-vip config ...
	I0926 17:57:02.742516    4295 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0926 17:57:02.756050    4295 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0926 17:57:02.756126    4295 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
REPLACED_BY_NEXT_EDIT
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0926 17:57:02.756193    4295 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0926 17:57:02.764361    4295 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0926 17:57:02.764412    4295 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0926 17:57:02.772776    4295 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0926 17:57:02.772776    4295 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0926 17:57:02.772799    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0926 17:57:02.772776    4295 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0926 17:57:02.772819    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0926 17:57:02.772821    4295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 17:57:02.772896    4295 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0926 17:57:02.772914    4295 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0926 17:57:02.785002    4295 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0926 17:57:02.785057    4295 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0926 17:57:02.785066    4295 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0926 17:57:02.785074    4295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0926 17:57:02.785090    4295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0926 17:57:02.785143    4295 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0926 17:57:02.797459    4295 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0926 17:57:02.797505    4295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
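
	Each kubeadm/kubectl/kubelet binary is transferred only after the remote stat fails, so a rerun against an already-provisioned node skips the roughly 190 MB copy. The check-then-copy pattern, sketched locally below as a stand-in for the ssh_runner's remote stat and scp (ensureFile is a hypothetical helper, not the actual implementation):

	package main
	
	import (
		"io"
		"os"
		"path/filepath"
	)
	
	// ensureFile copies src to dst only when dst is missing or has a different
	// size -- the same cheap existence check the log performs with stat.
	func ensureFile(src, dst string) error {
		si, err := os.Stat(src)
		if err != nil {
			return err
		}
		if di, err := os.Stat(dst); err == nil && di.Size() == si.Size() {
			return nil // already in place, skip the copy
		}
		if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
			return err
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o755)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}
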
	I0926 17:57:03.647735    4295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0926 17:57:03.656022    4295 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0926 17:57:03.669746    4295 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 17:57:03.683757    4295 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0926 17:57:03.697497    4295 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0926 17:57:03.700518    4295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 17:57:03.710689    4295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:57:03.816702    4295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:57:03.832186    4295 host.go:66] Checking if "ha-476000" exists ...
	I0926 17:57:03.832482    4295 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:57:03.832506    4295 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:57:03.843048    4295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52244
	I0926 17:57:03.843405    4295 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:57:03.843753    4295 main.go:141] libmachine: Using API Version  1
	I0926 17:57:03.843770    4295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:57:03.843972    4295 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:57:03.844083    4295 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:57:03.844179    4295 start.go:317] joinCluster: &{Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m05 IP:192.169.0.9 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:57:03.844285    4295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0926 17:57:03.844302    4295 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:57:03.844386    4295 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:57:03.844473    4295 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:57:03.844584    4295 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:57:03.844661    4295 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
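[Note: before joining m05, minikube asks the existing control plane to mint a fresh join command; --ttl=0 makes the bootstrap token non-expiring. The same command can be issued by hand on node ha-476000 to reproduce the token and CA-cert hash seen in the join below:

    # Sketch: regenerate a join command from the first control-plane node.
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm token create --print-join-command --ttl=0]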
	I0926 17:57:03.995947    4295 start.go:343] trying to join control-plane node "m05" to cluster: &{Name:m05 IP:192.169.0.9 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:true Worker:true}
	I0926 17:57:03.995978    4295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4d6686.t7xivt9r2j1ountj --discovery-token-ca-cert-hash sha256:cde4a9f1d02b17efd71ed93d12958f8464596b5619fa355326db09dcf9a7790d --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-476000-m05 --control-plane --apiserver-advertise-address=192.169.0.9 --apiserver-bind-port=8443"
	I0926 17:59:27.922051    4295 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4d6686.t7xivt9r2j1ountj --discovery-token-ca-cert-hash sha256:cde4a9f1d02b17efd71ed93d12958f8464596b5619fa355326db09dcf9a7790d --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-476000-m05 --control-plane --apiserver-advertise-address=192.169.0.9 --apiserver-bind-port=8443": (2m23.925515869s)
	E0926 17:59:27.922104    4295 start.go:345] control-plane node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4d6686.t7xivt9r2j1ountj --discovery-token-ca-cert-hash sha256:cde4a9f1d02b17efd71ed93d12958f8464596b5619fa355326db09dcf9a7790d --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-476000-m05 --control-plane --apiserver-advertise-address=192.169.0.9 --apiserver-bind-port=8443": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	[preflight] Running pre-flight checks before initializing the new control plane instance
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-476000-m05 localhost] and IPs [192.169.0.9 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-476000-m05 localhost] and IPs [192.169.0.9 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Using the existing "apiserver" certificate and key
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Valid certificates and keys now exist in "/var/lib/minikube/certs"
	[certs] Using the existing "sa" key
	[kubeconfig] Generating kubeconfig files
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[check-etcd] Checking that the etcd cluster is healthy
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://192.169.0.7:2379 with maintenance client: context deadline exceeded
	To see the stack trace of this error execute with --v=5 or higher
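[Note: the failing phase is check-etcd: kubeadm dials every etcd member recorded in the cluster, and 192.169.0.7 belongs to m03 in the joinCluster config above, while the audit table later in this log shows a `ha-476000 node delete m03` with no recorded end time. A stale member entry for an unreachable node would time out exactly like this. A diagnostic sketch, assuming etcdctl is available on a healthy control-plane node and kubeadm's cert layout under the certificateDir printed above:

    # Sketch: list etcd members from node ha-476000 to spot a stale entry.
    sudo ETCDCTL_API=3 etcdctl member list \
      --endpoints=https://192.169.0.5:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key
    # A member still registered at 192.169.0.7 with no reachable process behind
    # it can be dropped with: etcdctl member remove <MEMBER_ID> (same flags).]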
	I0926 17:59:27.922132    4295 start.go:348] resetting control-plane node "m05" before attempting to rejoin cluster...
	I0926 17:59:27.922147    4295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --force"
	I0926 17:59:27.996716    4295 start.go:352] successfully reset control-plane node "m05"
	I0926 17:59:27.996754    4295 retry.go:31] will retry after 11.072061196s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4d6686.t7xivt9r2j1ountj --discovery-token-ca-cert-hash sha256:cde4a9f1d02b17efd71ed93d12958f8464596b5619fa355326db09dcf9a7790d --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-476000-m05 --control-plane --apiserver-advertise-address=192.169.0.9 --apiserver-bind-port=8443": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	[preflight] Running pre-flight checks before initializing the new control plane instance
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-476000-m05 localhost] and IPs [192.169.0.9 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-476000-m05 localhost] and IPs [192.169.0.9 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Using the existing "apiserver" certificate and key
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Valid certificates and keys now exist in "/var/lib/minikube/certs"
	[certs] Using the existing "sa" key
	[kubeconfig] Generating kubeconfig files
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[check-etcd] Checking that the etcd cluster is healthy
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://192.169.0.7:2379 with maintenance client: context deadline exceeded
	To see the stack trace of this error execute with --v=5 or higher
	I0926 17:59:39.069358    4295 start.go:343] trying to join control-plane node "m05" to cluster: &{Name:m05 IP:192.169.0.9 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:true Worker:true}
	I0926 17:59:39.069443    4295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4d6686.t7xivt9r2j1ountj --discovery-token-ca-cert-hash sha256:cde4a9f1d02b17efd71ed93d12958f8464596b5619fa355326db09dcf9a7790d --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-476000-m05 --control-plane --apiserver-advertise-address=192.169.0.9 --apiserver-bind-port=8443"
	I0926 18:01:42.147215    4295 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4d6686.t7xivt9r2j1ountj --discovery-token-ca-cert-hash sha256:cde4a9f1d02b17efd71ed93d12958f8464596b5619fa355326db09dcf9a7790d --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-476000-m05 --control-plane --apiserver-advertise-address=192.169.0.9 --apiserver-bind-port=8443": (2m3.040061855s)
	E0926 18:01:42.147254    4295 start.go:345] control-plane node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4d6686.t7xivt9r2j1ountj --discovery-token-ca-cert-hash sha256:cde4a9f1d02b17efd71ed93d12958f8464596b5619fa355326db09dcf9a7790d --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-476000-m05 --control-plane --apiserver-advertise-address=192.169.0.9 --apiserver-bind-port=8443": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	[preflight] Running pre-flight checks before initializing the new control plane instance
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using the existing "apiserver" certificate and key
	[certs] Using the existing "apiserver-kubelet-client" certificate and key
	[certs] Using the existing "front-proxy-client" certificate and key
	[certs] Using the existing "etcd/healthcheck-client" certificate and key
	[certs] Using the existing "etcd/server" certificate and key
	[certs] Using the existing "etcd/peer" certificate and key
	[certs] Using the existing "apiserver-etcd-client" certificate and key
	[certs] Valid certificates and keys now exist in "/var/lib/minikube/certs"
	[certs] Using the existing "sa" key
	[kubeconfig] Generating kubeconfig files
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[check-etcd] Checking that the etcd cluster is healthy
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://192.169.0.7:2379 with maintenance client: context deadline exceeded
	To see the stack trace of this error execute with --v=5 or higher
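[Note: the repeated [WARNING Service-Kubelet] is incidental here: minikube starts kubelet itself (see the `systemctl start kubelet` near the top of this log) but never enables the unit, so kubeadm's preflight flags it on every attempt. Silencing it is one command on the joining node:

    # Sketch: enable kubelet so kubeadm's preflight warning disappears on retries.
    sudo systemctl enable kubelet.service]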
	I0926 18:01:42.147265    4295 start.go:348] resetting control-plane node "m05" before attempting to rejoin cluster...
	I0926 18:01:42.147273    4295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --force"
	I0926 18:01:42.217925    4295 start.go:352] successfully reset control-plane node "m05"
	I0926 18:01:42.217958    4295 start.go:319] duration metric: took 4m38.33551695s to joinCluster
	I0926 18:01:42.240332    4295 out.go:201] 
	W0926 18:01:42.259675    4295 out.go:270] X Exiting due to GUEST_NODE_ADD: failed to add node: join node to cluster: error joining control-plane node "m05" to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4d6686.t7xivt9r2j1ountj --discovery-token-ca-cert-hash sha256:cde4a9f1d02b17efd71ed93d12958f8464596b5619fa355326db09dcf9a7790d --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-476000-m05 --control-plane --apiserver-advertise-address=192.169.0.9 --apiserver-bind-port=8443": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	[preflight] Running pre-flight checks before initializing the new control plane instance
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using the existing "apiserver" certificate and key
	[certs] Using the existing "apiserver-kubelet-client" certificate and key
	[certs] Using the existing "front-proxy-client" certificate and key
	[certs] Using the existing "etcd/healthcheck-client" certificate and key
	[certs] Using the existing "etcd/server" certificate and key
	[certs] Using the existing "etcd/peer" certificate and key
	[certs] Using the existing "apiserver-etcd-client" certificate and key
	[certs] Valid certificates and keys now exist in "/var/lib/minikube/certs"
	[certs] Using the existing "sa" key
	[kubeconfig] Generating kubeconfig files
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[check-etcd] Checking that the etcd cluster is healthy
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://192.169.0.7:2379 with maintenance client: context deadline exceeded
	To see the stack trace of this error execute with --v=5 or higher
	
	W0926 18:01:42.259709    4295 out.go:270] * 
	W0926 18:01:42.263355    4295 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:01:42.283546    4295 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-amd64 node add -p ha-476000 --control-plane -v=7 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-476000 -n ha-476000
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-476000 logs -n 25: (3.500514482s)
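[Note: `logs -n 25` keeps the post-mortem short; for the GitHub issue requested in the failure box above, the complete dump goes to a file instead:

    # Sketch: capture the full log bundle for attachment to an issue.
    out/minikube-darwin-amd64 -p ha-476000 logs --file=logs.txt]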
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n ha-476000-m04 sudo cat                                                                                      | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /home/docker/cp-test_ha-476000-m03_ha-476000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-476000 cp testdata/cp-test.txt                                                                                            | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3898402723/001/cp-test_ha-476000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000:/home/docker/cp-test_ha-476000-m04_ha-476000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n ha-476000 sudo cat                                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /home/docker/cp-test_ha-476000-m04_ha-476000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m02:/home/docker/cp-test_ha-476000-m04_ha-476000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n ha-476000-m02 sudo cat                                                                                      | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /home/docker/cp-test_ha-476000-m04_ha-476000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m03:/home/docker/cp-test_ha-476000-m04_ha-476000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n ha-476000-m03 sudo cat                                                                                      | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /home/docker/cp-test_ha-476000-m04_ha-476000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-476000 node stop m02 -v=7                                                                                                 | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-476000 node start m02 -v=7                                                                                                | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:47 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-476000 -v=7                                                                                                       | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:47 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-476000 -v=7                                                                                                            | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:47 PDT | 26 Sep 24 17:47 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-476000 --wait=true -v=7                                                                                                | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:47 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-476000                                                                                                            | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:49 PDT |                     |
	| node    | ha-476000 node delete m03 -v=7                                                                                               | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:49 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-476000 stop -v=7                                                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:49 PDT | 26 Sep 24 17:53 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-476000 --wait=true                                                                                                     | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:53 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	| node    | add -p ha-476000                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:56 PDT |                     |
	|         | --control-plane -v=7                                                                                                         |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
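	
	[Note: the audit trail ends with the invocation under test. Outside the harness, the same failing step can be replayed against the existing profile using the command the test reports at ha_test.go:607 above:

    # Sketch: re-run the failing control-plane node addition by hand.
    out/minikube-darwin-amd64 node add -p ha-476000 --control-plane -v=7 --alsologtostderr]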
	
	
	==> Last Start <==
	Log file created at: 2024/09/26 17:53:00
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 17:53:00.467998    4178 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:53:00.468247    4178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:53:00.468252    4178 out.go:358] Setting ErrFile to fd 2...
	I0926 17:53:00.468256    4178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:53:00.468436    4178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 17:53:00.469901    4178 out.go:352] Setting JSON to false
	I0926 17:53:00.492370    4178 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3150,"bootTime":1727395230,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0926 17:53:00.492530    4178 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:53:00.514400    4178 out.go:177] * [ha-476000] minikube v1.34.0 on Darwin 14.6.1
	I0926 17:53:00.557228    4178 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:53:00.557300    4178 notify.go:220] Checking for updates...
	I0926 17:53:00.599719    4178 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:00.621009    4178 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0926 17:53:00.642091    4178 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:53:00.662936    4178 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 17:53:00.684204    4178 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:53:00.705550    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:00.706120    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.706169    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:00.715431    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52037
	I0926 17:53:00.715807    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:00.716207    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:00.716243    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:00.716493    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:00.716626    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:00.716833    4178 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:53:00.717101    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.717132    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:00.725380    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52039
	I0926 17:53:00.725706    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:00.726059    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:00.726076    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:00.726325    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:00.726449    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:00.754773    4178 out.go:177] * Using the hyperkit driver based on existing profile
	I0926 17:53:00.797071    4178 start.go:297] selected driver: hyperkit
	I0926 17:53:00.797101    4178 start.go:901] validating driver "hyperkit" against &{Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:53:00.797347    4178 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:53:00.797543    4178 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:53:00.797758    4178 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19711-1128/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0926 17:53:00.807380    4178 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0926 17:53:00.811121    4178 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.811145    4178 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0926 17:53:00.813743    4178 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:53:00.813780    4178 cni.go:84] Creating CNI manager for ""
	I0926 17:53:00.813817    4178 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0926 17:53:00.813892    4178 start.go:340] cluster config:
	{Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:53:00.814010    4178 iso.go:125] acquiring lock: {Name:mka8a9c5a237c1e4ae233281d2ff7965d13a843d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:53:00.856015    4178 out.go:177] * Starting "ha-476000" primary control-plane node in "ha-476000" cluster
	I0926 17:53:00.877127    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:53:00.877240    4178 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0926 17:53:00.877263    4178 cache.go:56] Caching tarball of preloaded images
	I0926 17:53:00.877457    4178 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 17:53:00.877476    4178 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
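[Note: the preload check is a pure cache hit: the lz4 tarball of images for v1.31.1 on docker is already on disk, so nothing is downloaded. The cache can be inspected directly (shown with the conventional $HOME-relative path; this run uses the Jenkins workspace path above):

    # Sketch: list cached preload tarballs to confirm the hit.
    ls -lh "$HOME/.minikube/cache/preloaded-tarball/"]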
	I0926 17:53:00.877658    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:00.878610    4178 start.go:360] acquireMachinesLock for ha-476000: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:53:00.878759    4178 start.go:364] duration metric: took 97.008µs to acquireMachinesLock for "ha-476000"
	I0926 17:53:00.878828    4178 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:53:00.878843    4178 fix.go:54] fixHost starting: 
	I0926 17:53:00.879324    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.879362    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:00.888435    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52041
	I0926 17:53:00.888799    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:00.889164    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:00.889177    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:00.889396    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:00.889518    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:00.889616    4178 main.go:141] libmachine: (ha-476000) Calling .GetState
	I0926 17:53:00.889695    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:00.889775    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4068
	I0926 17:53:00.890689    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid 4068 missing from process table
	I0926 17:53:00.890720    4178 fix.go:112] recreateIfNeeded on ha-476000: state=Stopped err=<nil>
	I0926 17:53:00.890735    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	W0926 17:53:00.890819    4178 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:53:00.933253    4178 out.go:177] * Restarting existing hyperkit VM for "ha-476000" ...
	I0926 17:53:00.956221    4178 main.go:141] libmachine: (ha-476000) Calling .Start
	I0926 17:53:00.956482    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:00.956522    4178 main.go:141] libmachine: (ha-476000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid
	I0926 17:53:00.958313    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid 4068 missing from process table
	I0926 17:53:00.958323    4178 main.go:141] libmachine: (ha-476000) DBG | pid 4068 is in state "Stopped"
	I0926 17:53:00.958337    4178 main.go:141] libmachine: (ha-476000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid...
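[Note: a pid file whose process has vanished is treated as stale and removed before the VM is restarted. The same check by hand, with the path from this run generalized to $HOME (pid 4068 is the one missing from the process table above):

    # Sketch: detect and clear a stale hyperkit pid file after an unclean shutdown.
    PIDFILE=$HOME/.minikube/machines/ha-476000/hyperkit.pid
    [ -f "$PIDFILE" ] && ! ps -p "$(cat "$PIDFILE")" >/dev/null 2>&1 && rm "$PIDFILE"]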
	I0926 17:53:00.958705    4178 main.go:141] libmachine: (ha-476000) DBG | Using UUID 99cfbb80-3e9d-4d4f-a72a-447af4e3b1db
	I0926 17:53:01.067490    4178 main.go:141] libmachine: (ha-476000) DBG | Generated MAC 96:a2:4a:f3:be:4a
	I0926 17:53:01.067521    4178 main.go:141] libmachine: (ha-476000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000
	I0926 17:53:01.067590    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"99cfbb80-3e9d-4d4f-a72a-447af4e3b1db", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:01.067614    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"99cfbb80-3e9d-4d4f-a72a-447af4e3b1db", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:01.067680    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "99cfbb80-3e9d-4d4f-a72a-447af4e3b1db", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/ha-476000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"}
	I0926 17:53:01.067717    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 99cfbb80-3e9d-4d4f-a72a-447af4e3b1db -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/ha-476000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"
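[Editor's note] The Arguments/CmdLine lines above show how the hyperkit driver flattens its HyperKit struct into an argv: one -s flag per virtual PCI slot (hostbridge, lpc, virtio-net, virtio-blk, ahci-cd, virtio-rnd) plus a -f kexec triple of kernel, initrd, and kernel command line. A minimal Go sketch of that flattening; the type and field names are illustrative assumptions, not the driver's actual code:

    package main

    import (
        "fmt"
        "strings"
    )

    // vmSpec is an illustrative stand-in for the driver's HyperKit struct;
    // the field names here are assumptions, not the real types.
    type vmSpec struct {
        PidFile, UUID, Disk, ISO, Kernel, Initrd, CmdLine string
        CPUs, MemMB                                       int
    }

    // argv flattens the spec into a hyperkit command line: one -s flag per
    // virtual PCI slot plus the -f kexec triple of kernel,initrd,cmdline.
    func (v vmSpec) argv() []string {
        return []string{
            "/usr/local/bin/hyperkit", "-A", "-u",
            "-F", v.PidFile,
            "-c", fmt.Sprint(v.CPUs),
            "-m", fmt.Sprintf("%dM", v.MemMB),
            "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net",
            "-U", v.UUID,
            "-s", "2:0,virtio-blk," + v.Disk,
            "-s", "3,ahci-cd," + v.ISO,
            "-s", "4,virtio-rnd",
            "-f", "kexec," + v.Kernel + "," + v.Initrd + "," + v.CmdLine,
        }
    }

    func main() {
        spec := vmSpec{
            PidFile: "hyperkit.pid", UUID: "99cfbb80-3e9d-4d4f-a72a-447af4e3b1db",
            Disk: "ha-476000.rawdisk", ISO: "boot2docker.iso",
            Kernel: "bzimage", Initrd: "initrd",
            CmdLine: "earlyprintk=serial loglevel=3 console=ttyS0",
            CPUs: 2, MemMB: 2200,
        }
        fmt.Println(strings.Join(spec.argv(), " "))
    }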
	I0926 17:53:01.067731    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 17:53:01.069340    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Pid is 4191
	I0926 17:53:01.069679    4178 main.go:141] libmachine: (ha-476000) DBG | Attempt 0
	I0926 17:53:01.069693    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:01.069753    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4191
	I0926 17:53:01.071639    4178 main.go:141] libmachine: (ha-476000) DBG | Searching for 96:a2:4a:f3:be:4a in /var/db/dhcpd_leases ...
	I0926 17:53:01.071694    4178 main.go:141] libmachine: (ha-476000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:53:01.071711    4178 main.go:141] libmachine: (ha-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f7523f}
	I0926 17:53:01.071719    4178 main.go:141] libmachine: (ha-476000) DBG | Found match: 96:a2:4a:f3:be:4a
	I0926 17:53:01.071724    4178 main.go:141] libmachine: (ha-476000) DBG | IP: 192.169.0.5
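[Editor's note] As the lines above show, the hyperkit driver discovers the VM's IP by scanning macOS's DHCP lease database for the MAC it generated for the NIC. A rough Go sketch of that lookup; the lease-file layout assumed here (one key=value field per line, blocks closed by '}') is an approximation, not minikube's actual parser:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // findIPByMAC scans a dhcpd_leases-style file for an entry whose
    // hw_address field ends with the given MAC and returns its ip_address.
    func findIPByMAC(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                // e.g. hw_address=1,96:a2:4a:f3:be:4a -- match the MAC part.
                if strings.HasSuffix(line, mac) && ip != "" {
                    return ip, nil
                }
            case line == "}":
                ip = "" // end of one lease block
            }
        }
        if err := sc.Err(); err != nil {
            return "", err
        }
        return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
        ip, err := findIPByMAC("/var/db/dhcpd_leases", "96:a2:4a:f3:be:4a")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(ip)
    }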
	I0926 17:53:01.071801    4178 main.go:141] libmachine: (ha-476000) Calling .GetConfigRaw
	I0926 17:53:01.072466    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:01.072682    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:01.073265    4178 machine.go:93] provisionDockerMachine start ...
	I0926 17:53:01.073276    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:01.073432    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:01.073553    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:01.073654    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:01.073744    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:01.073824    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:01.073962    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:01.074151    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:01.074160    4178 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 17:53:01.077803    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 17:53:01.131821    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 17:53:01.132498    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:01.132519    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:01.132527    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:01.132535    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:01.515934    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 17:53:01.515948    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 17:53:01.630853    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:01.630870    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:01.630880    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:01.630889    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:01.631762    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 17:53:01.631773    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 17:53:07.224844    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0926 17:53:07.224979    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0926 17:53:07.224989    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0926 17:53:07.249067    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0926 17:53:12.148094    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 17:53:12.148109    4178 main.go:141] libmachine: (ha-476000) Calling .GetMachineName
	I0926 17:53:12.148318    4178 buildroot.go:166] provisioning hostname "ha-476000"
	I0926 17:53:12.148328    4178 main.go:141] libmachine: (ha-476000) Calling .GetMachineName
	I0926 17:53:12.148430    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.148546    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.148649    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.148741    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.148844    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.148986    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.149192    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.149200    4178 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-476000 && echo "ha-476000" | sudo tee /etc/hostname
	I0926 17:53:12.225889    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-476000
	
	I0926 17:53:12.225907    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.226039    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.226125    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.226235    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.226313    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.226463    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.226601    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.226612    4178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-476000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-476000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-476000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:53:12.298491    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
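[Editor's note] The shell fragment above makes the new hostname resolve locally: if no /etc/hosts line already ends in the hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise it appends one. The same edit expressed as a Go sketch over the file contents (the function name is hypothetical):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostsEntry mirrors the shell logic above: if no line already
    // names the host, rewrite an existing 127.0.1.1 line or append one.
    func ensureHostsEntry(hosts, name string) string {
        if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
            return hosts // hostname already present
        }
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if re.MatchString(hosts) {
            return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "ha-476000"))
    }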
	I0926 17:53:12.298512    4178 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 17:53:12.298531    4178 buildroot.go:174] setting up certificates
	I0926 17:53:12.298537    4178 provision.go:84] configureAuth start
	I0926 17:53:12.298544    4178 main.go:141] libmachine: (ha-476000) Calling .GetMachineName
	I0926 17:53:12.298672    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:12.298777    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.298858    4178 provision.go:143] copyHostCerts
	I0926 17:53:12.298890    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:12.298959    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 17:53:12.298968    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:12.299110    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 17:53:12.299320    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:12.299359    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 17:53:12.299364    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:12.299452    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 17:53:12.299596    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:12.299633    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 17:53:12.299638    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:12.299717    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 17:53:12.299883    4178 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.ha-476000 san=[127.0.0.1 192.169.0.5 ha-476000 localhost minikube]
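[Editor's note] The server certificate minted here has to carry every address the Docker daemon might be reached at, which is why the san list mixes the loopback address, the VM IP, and the host names. A compact sketch of that SAN handling with Go's crypto/x509; unlike minikube it self-signs for brevity, and the validity period is only assumed to match the 26280h CertExpiration value seen later in the cluster config:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-476000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN list from the log line above: loopback, VM IP, names.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
            DNSNames:    []string{"ha-476000", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }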
	I0926 17:53:12.619231    4178 provision.go:177] copyRemoteCerts
	I0926 17:53:12.619306    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 17:53:12.619328    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.619499    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.619617    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.619721    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.619805    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:12.659598    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 17:53:12.659672    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 17:53:12.679552    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 17:53:12.679620    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0926 17:53:12.699069    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 17:53:12.699141    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 17:53:12.718755    4178 provision.go:87] duration metric: took 420.20261ms to configureAuth
	I0926 17:53:12.718767    4178 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:53:12.718921    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:12.718934    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:12.719072    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.719167    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.719255    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.719341    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.719422    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.719544    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.719669    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.719676    4178 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:53:12.785771    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:53:12.785788    4178 buildroot.go:70] root file system type: tmpfs
	I0926 17:53:12.785872    4178 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:53:12.785886    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.786022    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.786110    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.786193    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.786273    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.786415    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.786558    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.786601    4178 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:53:12.862455    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 17:53:12.862477    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.862607    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.862705    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.862800    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.862882    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.863016    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.863156    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.863169    4178 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:53:14.510518    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 17:53:14.510534    4178 machine.go:96] duration metric: took 13.437211612s to provisionDockerMachine
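[Editor's note] The exchange just above is a write-if-changed install: the rendered unit goes to docker.service.new, and only when diff reports a difference (here it cannot even stat the old path on the fresh VM, so the guard fires) is the file moved into place and the daemon reloaded, enabled, and restarted. A local Go sketch of the same idea; minikube actually performs these steps over SSH, so the function and error handling here are illustrative:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installIfChanged writes unit to path only when the contents differ,
    // then runs the reload/enable/restart steps (sketch; needs root).
    func installIfChanged(path string, unit []byte) error {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, unit) {
            return nil // nothing to do
        }
        if err := os.WriteFile(path, unit, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        _ = installIfChanged("/lib/systemd/system/docker.service", []byte("[Unit]\n..."))
    }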
	I0926 17:53:14.510545    4178 start.go:293] postStartSetup for "ha-476000" (driver="hyperkit")
	I0926 17:53:14.510553    4178 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:53:14.510563    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.510765    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:53:14.510780    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.510875    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.510981    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.511085    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.511186    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.553095    4178 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:53:14.556852    4178 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 17:53:14.556867    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 17:53:14.556973    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 17:53:14.557159    4178 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 17:53:14.557167    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 17:53:14.557383    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 17:53:14.567060    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:53:14.600616    4178 start.go:296] duration metric: took 90.060103ms for postStartSetup
	I0926 17:53:14.600637    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.600819    4178 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0926 17:53:14.600832    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.600912    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.600992    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.601061    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.601150    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.640650    4178 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0926 17:53:14.640716    4178 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0926 17:53:14.694957    4178 fix.go:56] duration metric: took 13.816065248s for fixHost
	I0926 17:53:14.694980    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.695115    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.695206    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.695301    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.695399    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.695527    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:14.695674    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:14.695682    4178 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:53:14.760098    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398394.872717718
	
	I0926 17:53:14.760109    4178 fix.go:216] guest clock: 1727398394.872717718
	I0926 17:53:14.760115    4178 fix.go:229] Guest: 2024-09-26 17:53:14.872717718 -0700 PDT Remote: 2024-09-26 17:53:14.69497 -0700 PDT m=+14.262859348 (delta=177.747718ms)
	I0926 17:53:14.760134    4178 fix.go:200] guest clock delta is within tolerance: 177.747718ms
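[Editor's note] The fix step compares the guest clock (read with `date +%s.%N` over SSH) against the host clock and only proceeds while the delta stays within a tolerance; here the 177.747718ms skew passes. A Go sketch of the parse-and-compare that reproduces the delta from the log; the 2s tolerance is an assumed value, not minikube's documented one:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns `date +%s.%N` output into a time.Time without
    // going through a lossy float conversion.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt((parts[1] + "000000000")[:9], 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1727398394.872717718")
        if err != nil {
            panic(err)
        }
        host := time.Unix(1727398394, 694970000) // the host reference in the log
        delta := guest.Sub(host)
        fmt.Println("delta:", delta, "ok:", delta.Abs() < 2*time.Second)
    }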
	I0926 17:53:14.760137    4178 start.go:83] releasing machines lock for "ha-476000", held for 13.881299475s
	I0926 17:53:14.760155    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760297    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:14.760395    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760729    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760850    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760950    4178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:53:14.760987    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.761013    4178 ssh_runner.go:195] Run: cat /version.json
	I0926 17:53:14.761025    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.761099    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.761116    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.761194    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.761205    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.761304    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.761313    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.761398    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.761432    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.795855    4178 ssh_runner.go:195] Run: systemctl --version
	I0926 17:53:14.843523    4178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 17:53:14.848548    4178 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:53:14.848602    4178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 17:53:14.862277    4178 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 17:53:14.862289    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:14.862388    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:14.879332    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 17:53:14.888407    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:53:14.897249    4178 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:53:14.897300    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:53:14.906191    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:14.914943    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:53:14.923611    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:14.932390    4178 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:53:14.941382    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:53:14.950233    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:53:14.959047    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 17:53:14.967887    4178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:53:14.975975    4178 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 17:53:14.976018    4178 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 17:53:14.985185    4178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
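[Editor's note] Netfilter setup above is a probe-then-repair pattern: query the bridge-nf-call-iptables sysctl, and when the key is missing (the status-255 failure, expected before the module is loaded), load br_netfilter and force IPv4 forwarding on. A root-only Go sketch of the same sequence:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func run(name string, args ...string) error {
        cmd := exec.Command(name, args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        // Probe first; a missing /proc/sys/net/bridge key just means the
        // module is not loaded yet, which is expected on a fresh guest.
        if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            fmt.Println("probe failed, loading br_netfilter:", err)
            if err := run("modprobe", "br_netfilter"); err != nil {
                panic(err)
            }
        }
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0); err != nil {
            panic(err)
        }
    }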
	I0926 17:53:14.993181    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:15.086628    4178 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 17:53:15.106310    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:15.106396    4178 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:53:15.118546    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:15.129665    4178 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:53:15.143061    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:15.154154    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:15.164978    4178 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 17:53:15.188125    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:15.199509    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:15.214608    4178 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:53:15.217523    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:53:15.225391    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 17:53:15.238858    4178 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:53:15.337444    4178 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:53:15.437802    4178 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:53:15.437879    4178 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 17:53:15.451733    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:15.563208    4178 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:53:17.891140    4178 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.327906141s)
	I0926 17:53:17.891209    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 17:53:17.902729    4178 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0926 17:53:17.915694    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:17.926164    4178 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 17:53:18.028587    4178 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 17:53:18.135687    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:18.246049    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 17:53:18.259788    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:18.270995    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:18.379007    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 17:53:18.442458    4178 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 17:53:18.442555    4178 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0926 17:53:18.447167    4178 start.go:563] Will wait 60s for crictl version
	I0926 17:53:18.447233    4178 ssh_runner.go:195] Run: which crictl
	I0926 17:53:18.450364    4178 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 17:53:18.474973    4178 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
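[Editor's note] Two bounded waits gate this phase: up to 60s for /var/run/cri-dockerd.sock to appear and up to 60s for crictl to answer; both succeed immediately here. A Go sketch of such a socket-path wait loop (the poll interval is an arbitrary choice, not minikube's):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for path until it exists or the deadline passes,
    // mirroring the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }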
	I0926 17:53:18.475082    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:53:18.492744    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:53:18.534852    4178 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0926 17:53:18.534897    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:18.535304    4178 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0926 17:53:18.539884    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 17:53:18.549924    4178 kubeadm.go:883] updating cluster {Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 17:53:18.550017    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:53:18.550087    4178 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 17:53:18.562413    4178 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0926 17:53:18.562429    4178 docker.go:615] Images already preloaded, skipping extraction
	I0926 17:53:18.562517    4178 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 17:53:18.574107    4178 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0926 17:53:18.574127    4178 cache_images.go:84] Images are preloaded, skipping loading
	I0926 17:53:18.574137    4178 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0926 17:53:18.574213    4178 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-476000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 17:53:18.574296    4178 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0926 17:53:18.611557    4178 cni.go:84] Creating CNI manager for ""
	I0926 17:53:18.611571    4178 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0926 17:53:18.611586    4178 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 17:53:18.611607    4178 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-476000 NodeName:ha-476000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 17:53:18.611700    4178 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-476000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
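[Editor's note] The generated kubeadm.yaml above is four YAML documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A Go sketch that walks such a multi-document file and prints each document's kind, assuming the third-party gopkg.in/yaml.v3 module is available:

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    // Prints the apiVersion/kind of every document in a multi-doc config
    // like the kubeadm.yaml rendered above.
    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                break // io.EOF once all documents are consumed
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }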
	
	I0926 17:53:18.611713    4178 kube-vip.go:115] generating kube-vip config ...
	I0926 17:53:18.611769    4178 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0926 17:53:18.624452    4178 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0926 17:53:18.624524    4178 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
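[Editor's note] Note the "auto-enabling control-plane load-balancing" line before this manifest: the lb_enable/lb_port entries only appear in the kube-vip static pod after the modprobe of the IPVS modules succeeds. A hedged Go sketch of that gate, shelling out exactly as the log does (requires sudo; the function name is hypothetical):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ipvsAvailable mirrors the probe above: kube-vip's load-balancing env
    // vars are only templated in when the IPVS modules load cleanly.
    func ipvsAvailable() bool {
        err := exec.Command("sudo", "sh", "-c",
            "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
        return err == nil
    }

    func main() {
        fmt.Println("enable control-plane load-balancing:", ipvsAvailable())
    }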
	I0926 17:53:18.624583    4178 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0926 17:53:18.632661    4178 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 17:53:18.632722    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0926 17:53:18.640016    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0926 17:53:18.653424    4178 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 17:53:18.666861    4178 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0926 17:53:18.680665    4178 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0926 17:53:18.694237    4178 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0926 17:53:18.697273    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 17:53:18.706489    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:18.799127    4178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:53:18.813428    4178 certs.go:68] Setting up /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000 for IP: 192.169.0.5
	I0926 17:53:18.813441    4178 certs.go:194] generating shared ca certs ...
	I0926 17:53:18.813450    4178 certs.go:226] acquiring lock for ca certs: {Name:mk3d4665cb5756ae49ea6f7cd2a58d273aecc507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:18.813627    4178 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key
	I0926 17:53:18.813697    4178 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key
	I0926 17:53:18.813709    4178 certs.go:256] generating profile certs ...
	I0926 17:53:18.813816    4178 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key
	I0926 17:53:18.813837    4178 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9
	I0926 17:53:18.813853    4178 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0926 17:53:19.198737    4178 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9 ...
	I0926 17:53:19.198759    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9: {Name:mkf72026f41cf052c5981dfd73bcc3ea46813a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.199347    4178 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9 ...
	I0926 17:53:19.199358    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9: {Name:mkb6fc9895bd700bb149434e702cedd545112b66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.199565    4178 certs.go:381] copying /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9 -> /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt
	I0926 17:53:19.199778    4178 certs.go:385] copying /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9 -> /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key
	I0926 17:53:19.200020    4178 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key
	I0926 17:53:19.200030    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0926 17:53:19.200052    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0926 17:53:19.200071    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0926 17:53:19.200089    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0926 17:53:19.200107    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0926 17:53:19.200125    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0926 17:53:19.200142    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0926 17:53:19.200160    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0926 17:53:19.200250    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem (1338 bytes)
	W0926 17:53:19.200297    4178 certs.go:480] ignoring /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679_empty.pem, impossibly tiny 0 bytes
	I0926 17:53:19.200306    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 17:53:19.200335    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem (1082 bytes)
	I0926 17:53:19.200365    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem (1123 bytes)
	I0926 17:53:19.200393    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem (1675 bytes)
	I0926 17:53:19.200455    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:53:19.200488    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem -> /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.200508    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.200526    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.200943    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 17:53:19.229781    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 17:53:19.249730    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 17:53:19.269922    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0926 17:53:19.290358    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0926 17:53:19.309964    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 17:53:19.329782    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 17:53:19.349170    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 17:53:19.368557    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem --> /usr/share/ca-certificates/1679.pem (1338 bytes)
	I0926 17:53:19.388315    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1708 bytes)
	I0926 17:53:19.407646    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 17:53:19.427156    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 17:53:19.441065    4178 ssh_runner.go:195] Run: openssl version
	I0926 17:53:19.445301    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1679.pem && ln -fs /usr/share/ca-certificates/1679.pem /etc/ssl/certs/1679.pem"
	I0926 17:53:19.453728    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.457317    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:36 /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.457357    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.461742    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1679.pem /etc/ssl/certs/51391683.0"
	I0926 17:53:19.470198    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0926 17:53:19.478616    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.482140    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:36 /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.482201    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.486473    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 17:53:19.494777    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 17:53:19.503295    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.506902    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.506943    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.511360    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
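
	The three-command pattern above is minikube's standard CA installation dance: place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 at it so OpenSSL's hash-based lookup can find it. A minimal local sketch of the same dance in Go, assuming openssl is on PATH and using illustrative paths (minikube itself drives these commands over SSH via ssh_runner):

    // hashlink.go — sketch: compute the OpenSSL subject hash of a CA cert and
    // link it into /etc/ssl/certs/<hash>.0 so lookup-by-hash finds it.
    // Assumes `openssl` is on PATH; paths are illustrative.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkByHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        // ln -fs semantics: drop any stale link, then create a fresh one.
        _ = os.Remove(link)
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
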
	I0926 17:53:19.519826    4178 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 17:53:19.523465    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0926 17:53:19.528006    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0926 17:53:19.532444    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0926 17:53:19.537126    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0926 17:53:19.541512    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0926 17:53:19.545827    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
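
	Each of the -checkend 86400 probes above asks whether a control-plane certificate expires within the next 24 hours. The same check can be done without shelling out, using Go's crypto/x509; a sketch with an illustrative cert path:

    // checkend.go — a pure-Go equivalent of
    // `openssl x509 -noout -in <cert> -checkend 86400`:
    // report whether the certificate expires within the next 24 hours.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        if soon {
            fmt.Println("certificate expires within 24h")
            os.Exit(1)
        }
    }
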
	I0926 17:53:19.550166    4178 kubeadm.go:392] StartCluster: {Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:53:19.550298    4178 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 17:53:19.561803    4178 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 17:53:19.569639    4178 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0926 17:53:19.569650    4178 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0926 17:53:19.569698    4178 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0926 17:53:19.577403    4178 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0926 17:53:19.577718    4178 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-476000" does not appear in /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:19.577801    4178 kubeconfig.go:62] /Users/jenkins/minikube-integration/19711-1128/kubeconfig needs updating (will repair): [kubeconfig missing "ha-476000" cluster setting kubeconfig missing "ha-476000" context setting]
	I0926 17:53:19.577967    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/kubeconfig: {Name:mkb9c069c578c490d8cb734b05a21821e9215482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
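
	The repair above fires because neither a cluster nor a context named "ha-476000" exists yet in the kubeconfig. A sketch of that presence test using client-go's clientcmd loader (assumes k8s.io/client-go is available; the profile name is taken from the log):

    // kubeconfig_check.go — sketch of the "does this profile appear in the
    // kubeconfig" test behind the repair message above.
    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        path := os.Getenv("KUBECONFIG") // illustrative; minikube tracks its own path
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        name := "ha-476000"
        _, hasCluster := cfg.Clusters[name]
        _, hasContext := cfg.Contexts[name]
        if !hasCluster || !hasContext {
            fmt.Printf("kubeconfig needs updating: cluster=%v context=%v\n", hasCluster, hasContext)
        }
    }
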
	I0926 17:53:19.578378    4178 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:19.578577    4178 kapi.go:59] client config for ha-476000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key", CAFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7459f00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 17:53:19.578890    4178 cert_rotation.go:140] Starting client certificate rotation controller
	I0926 17:53:19.579075    4178 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0926 17:53:19.586457    4178 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0926 17:53:19.586468    4178 kubeadm.go:597] duration metric: took 16.814329ms to restartPrimaryControlPlane
	I0926 17:53:19.586474    4178 kubeadm.go:394] duration metric: took 36.313109ms to StartCluster
	I0926 17:53:19.586484    4178 settings.go:142] acquiring lock: {Name:mka8948d0f70add5c5f20f2eca7124a97a496c1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.586556    4178 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:19.586877    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/kubeconfig: {Name:mkb9c069c578c490d8cb734b05a21821e9215482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.587096    4178 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:53:19.587108    4178 start.go:241] waiting for startup goroutines ...
	I0926 17:53:19.587128    4178 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 17:53:19.587252    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:19.629430    4178 out.go:177] * Enabled addons: 
	I0926 17:53:19.650423    4178 addons.go:510] duration metric: took 63.269239ms for enable addons: enabled=[]
	I0926 17:53:19.650464    4178 start.go:246] waiting for cluster config update ...
	I0926 17:53:19.650475    4178 start.go:255] writing updated cluster config ...
	I0926 17:53:19.672508    4178 out.go:201] 
	I0926 17:53:19.693989    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:19.694118    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:19.716427    4178 out.go:177] * Starting "ha-476000-m02" control-plane node in "ha-476000" cluster
	I0926 17:53:19.758555    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:53:19.758588    4178 cache.go:56] Caching tarball of preloaded images
	I0926 17:53:19.758767    4178 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 17:53:19.758785    4178 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:53:19.758898    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:19.759817    4178 start.go:360] acquireMachinesLock for ha-476000-m02: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:53:19.759922    4178 start.go:364] duration metric: took 80.364µs to acquireMachinesLock for "ha-476000-m02"
	I0926 17:53:19.759947    4178 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:53:19.759956    4178 fix.go:54] fixHost starting: m02
	I0926 17:53:19.760406    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:19.760442    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:19.769605    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52063
	I0926 17:53:19.770014    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:19.770353    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:19.770365    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:19.770608    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:19.770743    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:19.770835    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetState
	I0926 17:53:19.770922    4178 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:19.771000    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid from json: 4002
	I0926 17:53:19.771916    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid 4002 missing from process table
	I0926 17:53:19.771940    4178 fix.go:112] recreateIfNeeded on ha-476000-m02: state=Stopped err=<nil>
	I0926 17:53:19.771957    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	W0926 17:53:19.772037    4178 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:53:19.814436    4178 out.go:177] * Restarting existing hyperkit VM for "ha-476000-m02" ...
	I0926 17:53:19.835535    4178 main.go:141] libmachine: (ha-476000-m02) Calling .Start
	I0926 17:53:19.835810    4178 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:19.835874    4178 main.go:141] libmachine: (ha-476000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid
	I0926 17:53:19.837665    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid 4002 missing from process table
	I0926 17:53:19.837678    4178 main.go:141] libmachine: (ha-476000-m02) DBG | pid 4002 is in state "Stopped"
	I0926 17:53:19.837694    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid...
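
	A pid file can outlive an unclean shutdown, so the pid read from hyperkit.pid is only trusted after being confirmed against the process table, which is what the "pid 4002 missing from process table" lines reflect. A sketch of that liveness test using signal 0 (pid-file path illustrative):

    // pidcheck.go — sketch of the stale-pid-file test above: confirm the pid
    // from the file is actually alive before trusting it, else remove the file.
    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
        "syscall"
    )

    func pidAlive(pid int) bool {
        // Signal 0 performs the existence/permission check without sending anything.
        return syscall.Kill(pid, syscall.Signal(0)) == nil
    }

    func main() {
        pidFile := "/path/to/hyperkit.pid" // illustrative
        data, err := os.ReadFile(pidFile)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if !pidAlive(pid) {
            fmt.Printf("pid %d missing from process table, removing stale pid file\n", pid)
            _ = os.Remove(pidFile)
        }
    }
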
	I0926 17:53:19.838041    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Using UUID 58f499c4-942a-445b-bae0-ab27a7b8106e
	I0926 17:53:19.865707    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Generated MAC 9e:5:36:80:93:e3
	I0926 17:53:19.865728    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000
	I0926 17:53:19.865872    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"58f499c4-942a-445b-bae0-ab27a7b8106e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:19.865901    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"58f499c4-942a-445b-bae0-ab27a7b8106e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:19.865946    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "58f499c4-942a-445b-bae0-ab27a7b8106e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/ha-476000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"}
	I0926 17:53:19.866020    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 58f499c4-942a-445b-bae0-ab27a7b8106e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/ha-476000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"
	I0926 17:53:19.866041    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 17:53:19.867306    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Pid is 4198
	I0926 17:53:19.867704    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Attempt 0
	I0926 17:53:19.867718    4178 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:19.867787    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid from json: 4198
	I0926 17:53:19.869727    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Searching for 9e:5:36:80:93:e3 in /var/db/dhcpd_leases ...
	I0926 17:53:19.869759    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:53:19.869772    4178 main.go:141] libmachine: (ha-476000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 17:53:19.869793    4178 main.go:141] libmachine: (ha-476000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 17:53:19.869821    4178 main.go:141] libmachine: (ha-476000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f751f8}
	I0926 17:53:19.869834    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Found match: 9e:5:36:80:93:e3
	I0926 17:53:19.869848    4178 main.go:141] libmachine: (ha-476000-m02) DBG | IP: 192.169.0.6
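
	The guest's IP is discovered by scanning macOS's /var/db/dhcpd_leases for the lease whose hw_address carries the VM's generated MAC. A sketch of that lookup, with the field names matching the dhcp entries printed above (the exact lease-file field order is an assumption):

    // leases.go — sketch: find the ip_address of the dhcpd_leases block whose
    // hw_address ends with the given MAC (format: hw_address=1,9e:5:36:80:93:e3).
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func ipForMAC(leasesPath, mac string) (string, error) {
        f, err := os.Open(leasesPath)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                // The part after the comma is the MAC itself.
                if strings.HasSuffix(line, ","+mac) {
                    return ip, nil
                }
            case line == "}":
                ip = "" // end of a lease block; reset state
            }
        }
        return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
        ip, err := ipForMAC("/var/db/dhcpd_leases", "9e:5:36:80:93:e3")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(ip)
    }
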
	I0926 17:53:19.869914    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetConfigRaw
	I0926 17:53:19.870579    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:19.870762    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:19.871158    4178 machine.go:93] provisionDockerMachine start ...
	I0926 17:53:19.871172    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:19.871294    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:19.871392    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:19.871530    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:19.871631    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:19.871718    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:19.871893    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:19.872031    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:19.872038    4178 main.go:141] libmachine: About to run SSH command:
	hostname
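
	"Using SSH client type: native" means the commands that follow run over golang.org/x/crypto/ssh rather than the system ssh binary. A sketch of such a native client running the same hostname command (key path illustrative; host-key checking disabled only for brevity):

    // sshcmd.go — sketch of a native Go SSH client, as used for the guest
    // commands in this log. Do not disable host-key checking in real code.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/path/to/machines/ha-476000-m02/id_rsa") // illustrative
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // brevity only
        }
        client, err := ssh.Dial("tcp", "192.169.0.6:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.Output("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
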
	I0926 17:53:19.875766    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 17:53:19.884496    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 17:53:19.885379    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:19.885391    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:19.885398    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:19.885403    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:20.270703    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 17:53:20.270718    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 17:53:20.385412    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:20.385431    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:20.385441    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:20.385468    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:20.386358    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 17:53:20.386369    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 17:53:25.988386    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0926 17:53:25.988424    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0926 17:53:25.988435    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0926 17:53:26.012163    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:26 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0926 17:53:30.140708    4178 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.6:22: connect: connection refused
	I0926 17:53:33.199866    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 17:53:33.199881    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetMachineName
	I0926 17:53:33.200004    4178 buildroot.go:166] provisioning hostname "ha-476000-m02"
	I0926 17:53:33.200013    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetMachineName
	I0926 17:53:33.200123    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.200213    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.200322    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.200426    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.200540    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.200702    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.200858    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.200867    4178 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-476000-m02 && echo "ha-476000-m02" | sudo tee /etc/hostname
	I0926 17:53:33.269037    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-476000-m02
	
	I0926 17:53:33.269056    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.269193    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.269285    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.269368    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.269450    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.269573    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.269735    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.269746    4178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-476000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-476000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-476000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:53:33.331289    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
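
	The shell fragment above rewrites an existing 127.0.1.1 line to the new hostname, or appends one if none exists, but only when the hostname is not already resolvable from /etc/hosts. The same edit expressed in pure Go against a local file, as a sketch:

    // hosts.go — sketch of the /etc/hosts edit above: ensure the machine's
    // hostname resolves locally by rewriting an existing 127.0.1.1 entry or
    // appending one.
    package main

    import (
        "os"
        "regexp"
        "strings"
    )

    func ensureHostname(hostsPath, hostname string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        content := string(data)
        // Equivalent of: grep -xq '.*\s<hostname>' /etc/hosts
        if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(content) {
            return nil // already present
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(content) {
            content = loopback.ReplaceAllString(content, "127.0.1.1 "+hostname)
        } else {
            if !strings.HasSuffix(content, "\n") {
                content += "\n"
            }
            content += "127.0.1.1 " + hostname + "\n"
        }
        return os.WriteFile(hostsPath, []byte(content), 0o644)
    }

    func main() {
        if err := ensureHostname("/etc/hosts", "ha-476000-m02"); err != nil {
            panic(err)
        }
    }
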
	I0926 17:53:33.331305    4178 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 17:53:33.331314    4178 buildroot.go:174] setting up certificates
	I0926 17:53:33.331321    4178 provision.go:84] configureAuth start
	I0926 17:53:33.331328    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetMachineName
	I0926 17:53:33.331463    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:33.331556    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.331643    4178 provision.go:143] copyHostCerts
	I0926 17:53:33.331674    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:33.331734    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 17:53:33.331740    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:33.331856    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 17:53:33.332044    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:33.332093    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 17:53:33.332098    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:33.332176    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 17:53:33.332314    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:33.332352    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 17:53:33.332356    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:33.332427    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 17:53:33.332570    4178 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.ha-476000-m02 san=[127.0.0.1 192.169.0.6 ha-476000-m02 localhost minikube]
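
	The server cert generated here is signed by the machine CA and carries the SANs listed in the log (127.0.0.1, the node IP, the hostname, localhost, minikube). A sketch of that signing step with crypto/x509; the paths, key size, validity window, and PKCS#1 key encoding are assumptions:

    // servercert.go — sketch: sign a TLS server certificate against an
    // existing CA, with the SANs from the provision.go line above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func mustDecode(path string) *pem.Block {
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic(path + ": no PEM data")
        }
        return block
    }

    func main() {
        caCert, err := x509.ParseCertificate(mustDecode("ca.pem").Bytes)
        if err != nil {
            panic(err)
        }
        caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem").Bytes) // assumes PKCS#1
        if err != nil {
            panic(err)
        }
        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-476000-m02"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative validity
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs as printed in the log line above.
            DNSNames:    []string{"ha-476000-m02", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        certOut := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyOut := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
        if err := os.WriteFile("server.pem", certOut, 0o644); err != nil {
            panic(err)
        }
        if err := os.WriteFile("server-key.pem", keyOut, 0o600); err != nil {
            panic(err)
        }
    }
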
	I0926 17:53:33.395607    4178 provision.go:177] copyRemoteCerts
	I0926 17:53:33.395696    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 17:53:33.395715    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.395906    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.396015    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.396100    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.396196    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:33.431740    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 17:53:33.431806    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0926 17:53:33.452053    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 17:53:33.452106    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 17:53:33.471760    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 17:53:33.471825    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 17:53:33.490896    4178 provision.go:87] duration metric: took 159.567474ms to configureAuth
	I0926 17:53:33.490910    4178 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:53:33.491086    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:33.491099    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:33.491231    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.491321    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.491413    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.491498    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.491591    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.491713    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.491847    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.491854    4178 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:53:33.547403    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:53:33.547417    4178 buildroot.go:70] root file system type: tmpfs
	I0926 17:53:33.547504    4178 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:53:33.547518    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.547665    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.547775    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.547896    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.547997    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.548125    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.548268    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.548312    4178 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:53:33.613348    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
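
	The unit echoed back above is produced by substituting the proxy environment and provider label into a service-file template and piping the result through sudo tee to docker.service.new. A local sketch of that templating with Go's text/template; the template struct here is illustrative, not minikube's actual one:

    // unit.go — sketch: render a docker.service drop-in from a template,
    // substituting proxy environment entries and the provider label.
    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Unit]
    Description=Docker Application Container Engine
    After=network.target minikube-automount.service docker.socket
    Requires=minikube-automount.service docker.socket

    [Service]
    Type=notify
    Restart=on-failure
    {{range .Env}}Environment={{.}}
    {{end}}
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --label provider={{.Provider}}
    ExecReload=/bin/kill -s HUP $MAINPID

    [Install]
    WantedBy=multi-user.target
    `

    func main() {
        t := template.Must(template.New("docker.service").Parse(unit))
        err := t.Execute(os.Stdout, struct {
            Env      []string
            Provider string
        }{
            Env:      []string{"NO_PROXY=192.169.0.5"},
            Provider: "hyperkit",
        })
        if err != nil {
            panic(err)
        }
    }
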
	
	I0926 17:53:33.613367    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.613495    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.613582    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.613661    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.613747    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.613879    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.614018    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.614033    4178 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:53:35.261247    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 17:53:35.261262    4178 machine.go:96] duration metric: took 15.390039559s to provisionDockerMachine
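
	The diff || { mv; daemon-reload; restart; } one-liner above makes the update idempotent: docker is only restarted when the freshly rendered unit actually differs from the live one (here the live file did not yet exist, so the unit was installed and enabled). A pure-Go sketch of the same compare-then-swap decision:

    // swapifchanged.go — sketch of the compare-then-swap idiom: the rendered
    // unit only replaces the live one (triggering a reload/restart) when the
    // two differ. A missing live file counts as "changed".
    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func swapIfChanged(livePath, newPath string) (bool, error) {
        oldData, oldErr := os.ReadFile(livePath)
        newData, err := os.ReadFile(newPath)
        if err != nil {
            return false, err
        }
        if oldErr == nil && bytes.Equal(oldData, newData) {
            return false, os.Remove(newPath) // identical: discard the .new file
        }
        return true, os.Rename(newPath, livePath)
    }

    func main() {
        changed, err := swapIfChanged("/lib/systemd/system/docker.service", "/lib/systemd/system/docker.service.new")
        if err != nil {
            panic(err)
        }
        if changed {
            fmt.Println("unit replaced; run: systemctl daemon-reload && systemctl restart docker")
        }
    }
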
	I0926 17:53:35.261270    4178 start.go:293] postStartSetup for "ha-476000-m02" (driver="hyperkit")
	I0926 17:53:35.261294    4178 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:53:35.261308    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.261509    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:53:35.261522    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.261612    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.261704    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.261809    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.261922    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:35.302268    4178 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:53:35.305656    4178 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 17:53:35.305666    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 17:53:35.305765    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 17:53:35.305947    4178 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 17:53:35.305953    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 17:53:35.306171    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 17:53:35.314020    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:53:35.344643    4178 start.go:296] duration metric: took 83.349532ms for postStartSetup
	I0926 17:53:35.344681    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.344863    4178 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0926 17:53:35.344877    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.344965    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.345056    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.345137    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.345223    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:35.381164    4178 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0926 17:53:35.381229    4178 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0926 17:53:35.414571    4178 fix.go:56] duration metric: took 15.654555871s for fixHost
	I0926 17:53:35.414597    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.414747    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.414839    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.414932    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.415022    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.415156    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:35.415295    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:35.415302    4178 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:53:35.472100    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398415.586409353
	
	I0926 17:53:35.472129    4178 fix.go:216] guest clock: 1727398415.586409353
	I0926 17:53:35.472134    4178 fix.go:229] Guest: 2024-09-26 17:53:35.586409353 -0700 PDT Remote: 2024-09-26 17:53:35.414586 -0700 PDT m=+34.982399519 (delta=171.823353ms)
	I0926 17:53:35.472150    4178 fix.go:200] guest clock delta is within tolerance: 171.823353ms
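
	The guest clock check parses the date +%s.%N output from the VM, compares it against the host clock, and skips resyncing because the 171.8ms delta is inside tolerance. A sketch of that comparison, with the guest timestamp hard-coded to the value from the log and an assumed one-second tolerance:

    // clockdelta.go — sketch: parse the guest's `date +%s.%N` output and
    // decide whether the host/guest clock delta is within tolerance.
    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    func main() {
        const tolerance = time.Second // assumed tolerance

        guestOut := "1727398415.586409353" // what `date +%s.%N` returned over SSH
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
            return
        }
        fmt.Printf("guest clock delta %v exceeds tolerance, resync needed\n", delta)
    }
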
	I0926 17:53:35.472153    4178 start.go:83] releasing machines lock for "ha-476000-m02", held for 15.712162695s
	I0926 17:53:35.472170    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.472305    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:35.513568    4178 out.go:177] * Found network options:
	I0926 17:53:35.535552    4178 out.go:177]   - NO_PROXY=192.169.0.5
	W0926 17:53:35.557416    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:53:35.557455    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.558341    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.558579    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.558709    4178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:53:35.558764    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	W0926 17:53:35.558835    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:53:35.558964    4178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 17:53:35.558985    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.559000    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.559215    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.559232    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.559433    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.559464    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.559662    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.559681    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:35.559790    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	W0926 17:53:35.596059    4178 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:53:35.596139    4178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 17:53:35.610162    4178 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 17:53:35.610178    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:35.610237    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:35.646709    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 17:53:35.656640    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:53:35.665578    4178 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:53:35.665623    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:53:35.674574    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:35.683489    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:53:35.692471    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:35.701275    4178 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:53:35.710401    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:53:35.719421    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:53:35.728448    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 17:53:35.738067    4178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:53:35.746743    4178 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 17:53:35.746802    4178 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 17:53:35.755939    4178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
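
	The status-255 sysctl failure above is expected when br_netfilter is not loaded: the /proc/sys/net/bridge tree only appears once the module does, so the fallback is modprobe followed by enabling IPv4 forwarding. A Linux-only sketch of that sequence (requires root):

    // netfilter.go — sketch: probe for bridge-nf-call-iptables, load
    // br_netfilter if it is missing, then enable IPv4 forwarding via /proc.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
        if _, err := os.Stat(key); err != nil {
            // Equivalent of the failed sysctl probe: load the module and move on.
            fmt.Println("bridge-nf-call-iptables missing, loading br_netfilter")
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Fprintf(os.Stderr, "modprobe: %v: %s", err, out)
                os.Exit(1)
            }
        }
        // echo 1 > /proc/sys/net/ipv4/ip_forward
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
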
	I0926 17:53:35.763977    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:35.862563    4178 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 17:53:35.881531    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:35.881616    4178 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:53:35.899471    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:35.910823    4178 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:53:35.923558    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:35.935946    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:35.946007    4178 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 17:53:35.969898    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:35.980115    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:35.995271    4178 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:53:35.998508    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:53:36.005810    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 17:53:36.019492    4178 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:53:36.116976    4178 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:53:36.228090    4178 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:53:36.228117    4178 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 17:53:36.242164    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:36.335597    4178 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:53:38.678847    4178 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.343223137s)
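
Here minikube ships a ~130-byte /etc/docker/daemon.json to force the "cgroupfs" cgroup driver and then restarts the daemon, which accounts for the 2.34s restart above. The log does not print the payload; the sketch below reconstructs a plausible one, and the exec-opts key is an assumption based on Docker's documented daemon options, not taken from this log:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // ASSUMPTION: daemon.json content is not shown in the log;
        // exec-opts/native.cgroupdriver is Docker's documented knob for this.
        daemon := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        b, _ := json.MarshalIndent(daemon, "", "  ")
        fmt.Println(string(b))
        // The daemon must be restarted afterwards, hence the
        // `systemctl restart docker` in the log.
    }
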
	I0926 17:53:38.678917    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 17:53:38.689531    4178 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0926 17:53:38.702816    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:38.713151    4178 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 17:53:38.819068    4178 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 17:53:38.926667    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:39.040074    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 17:53:39.054197    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:39.065256    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:39.163219    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 17:53:39.228416    4178 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 17:53:39.228518    4178 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0926 17:53:39.233191    4178 start.go:563] Will wait 60s for crictl version
	I0926 17:53:39.233249    4178 ssh_runner.go:195] Run: which crictl
	I0926 17:53:39.236580    4178 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 17:53:39.262407    4178 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0926 17:53:39.262495    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:53:39.279010    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
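
The two docker version --format {{.Server.Version}} probes above are how the runtime version (27.3.1) is read for the banner that follows. Run locally, the equivalent is:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("server version:", strings.TrimSpace(string(out))) // e.g. 27.3.1
    }
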
	I0926 17:53:39.317905    4178 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0926 17:53:39.359545    4178 out.go:177]   - env NO_PROXY=192.169.0.5
	I0926 17:53:39.381103    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:39.381320    4178 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0926 17:53:39.384579    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
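
The bash one-liner above makes the host.minikube.internal mapping idempotent: any stale line is filtered out of /etc/hosts before the fresh 192.169.0.1 entry is appended. A direct Go rendering of the same filter-and-append (the bash version goes through a temp file plus sudo cp; this sketch writes the file directly and so needs root):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "192.169.0.1\thost.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Same filter as the grep -v $'\thost.minikube.internal$' above.
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry, "")
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0o644); err != nil {
            panic(err)
        }
    }
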
	I0926 17:53:39.394395    4178 mustload.go:65] Loading cluster: ha-476000
	I0926 17:53:39.394560    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:39.394810    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:39.394834    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:39.403482    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52086
	I0926 17:53:39.403823    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:39.404150    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:39.404164    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:39.404434    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:39.404542    4178 main.go:141] libmachine: (ha-476000) Calling .GetState
	I0926 17:53:39.404632    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:39.404706    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4191
	I0926 17:53:39.405678    4178 host.go:66] Checking if "ha-476000" exists ...
	I0926 17:53:39.405956    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:39.405986    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:39.414686    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52088
	I0926 17:53:39.415056    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:39.415379    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:39.415388    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:39.415605    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:39.415728    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:39.415830    4178 certs.go:68] Setting up /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000 for IP: 192.169.0.6
	I0926 17:53:39.415836    4178 certs.go:194] generating shared ca certs ...
	I0926 17:53:39.415849    4178 certs.go:226] acquiring lock for ca certs: {Name:mk3d4665cb5756ae49ea6f7cd2a58d273aecc507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:39.416032    4178 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key
	I0926 17:53:39.416108    4178 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key
	I0926 17:53:39.416119    4178 certs.go:256] generating profile certs ...
	I0926 17:53:39.416243    4178 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key
	I0926 17:53:39.416331    4178 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.462632c0
	I0926 17:53:39.416399    4178 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key
	I0926 17:53:39.416406    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0926 17:53:39.416427    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0926 17:53:39.416446    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0926 17:53:39.416465    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0926 17:53:39.416482    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0926 17:53:39.416510    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0926 17:53:39.416544    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0926 17:53:39.416564    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0926 17:53:39.416666    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem (1338 bytes)
	W0926 17:53:39.416716    4178 certs.go:480] ignoring /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679_empty.pem, impossibly tiny 0 bytes
	I0926 17:53:39.416725    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 17:53:39.416762    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem (1082 bytes)
	I0926 17:53:39.416795    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem (1123 bytes)
	I0926 17:53:39.416828    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem (1675 bytes)
	I0926 17:53:39.416893    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:53:39.416929    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.416949    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem -> /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.416967    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.416991    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:39.417078    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:39.417153    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:39.417237    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:39.417320    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
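
The SSH client constructed above (192.169.0.5:22, user docker, key auth) is what executes all the stat and scp steps that follow. A standalone sketch with golang.org/x/crypto/ssh, not minikube's sshutil package; the key path is copied from the log line:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local VM, not for production
        }
        client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.Output("stat -c %s /var/lib/minikube/certs/sa.pub")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
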
	I0926 17:53:39.447975    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0926 17:53:39.451073    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0926 17:53:39.458912    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0926 17:53:39.462003    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0926 17:53:39.470783    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0926 17:53:39.473836    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0926 17:53:39.481537    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0926 17:53:39.484645    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0926 17:53:39.492945    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0926 17:53:39.495978    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0926 17:53:39.503610    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0926 17:53:39.506808    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0926 17:53:39.514787    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 17:53:39.534891    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 17:53:39.554745    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 17:53:39.574668    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0926 17:53:39.594523    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0926 17:53:39.614131    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 17:53:39.633606    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 17:53:39.653376    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 17:53:39.673369    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 17:53:39.692952    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem --> /usr/share/ca-certificates/1679.pem (1338 bytes)
	I0926 17:53:39.712634    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1708 bytes)
	I0926 17:53:39.732005    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0926 17:53:39.745464    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0926 17:53:39.759232    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0926 17:53:39.772911    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0926 17:53:39.786441    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0926 17:53:39.800266    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0926 17:53:39.813927    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0926 17:53:39.827332    4178 ssh_runner.go:195] Run: openssl version
	I0926 17:53:39.831566    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0926 17:53:39.839850    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.843163    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:36 /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.843206    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.847374    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 17:53:39.855624    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 17:53:39.863965    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.867400    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.867452    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.871715    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 17:53:39.879907    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1679.pem && ln -fs /usr/share/ca-certificates/1679.pem /etc/ssl/certs/1679.pem"
	I0926 17:53:39.888247    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.891606    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:36 /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.891654    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.895855    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1679.pem /etc/ssl/certs/51391683.0"
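
Each ln -fs above creates an OpenSSL hash-dir entry: /etc/ssl/certs/<subject-hash>.0 must point at the CA (b5213941.0 for minikubeCA, 51391683.0 for 1679.pem) so that anything using the system trust store can resolve it. A sketch that shells out for the hash and creates the link (root required; paths taken from the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const pem = "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // b5213941 for minikubeCA in this log
        link := "/etc/ssl/certs/" + hash + ".0"
        if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
            panic(err)
        }
        fmt.Println("linked", link, "->", pem)
    }
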
	I0926 17:53:39.904043    4178 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 17:53:39.907450    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0926 17:53:39.911778    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0926 17:53:39.915909    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0926 17:53:39.920037    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0926 17:53:39.924167    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0926 17:53:39.928372    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
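
The openssl -checkend 86400 runs above assert that every control-plane cert stays valid for at least another 24h; a non-zero exit would trigger regeneration. The same check in pure Go with crypto/x509 (the file name is illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("apiserver-kubelet-client.crt") // illustrative path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // openssl x509 -checkend 86400: fail if the cert expires within 24h.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h; needs regeneration")
            os.Exit(1)
        }
        fmt.Println("certificate valid for at least another 24h")
    }
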
	I0926 17:53:39.932543    4178 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.1 docker true true} ...
	I0926 17:53:39.932604    4178 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-476000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
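
The kubelet drop-in above is rendered per node: hostname-override ha-476000-m02 and node-ip 192.169.0.6 are the only values that vary between members of this HA cluster. A minimal sketch of that per-node rendering, with the values taken from the log (the [Unit]/[Install] sections are omitted, and this is not minikube's actual template):

    package main

    import "fmt"

    func main() {
        kubelet := "/var/lib/minikube/binaries/v1.31.1/kubelet"
        node, ip := "ha-476000-m02", "192.169.0.6"
        fmt.Printf("[Service]\nExecStart=\nExecStart=%s --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s\n", kubelet, node, ip)
    }
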
	I0926 17:53:39.932624    4178 kube-vip.go:115] generating kube-vip config ...
	I0926 17:53:39.932670    4178 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0926 17:53:39.944715    4178 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0926 17:53:39.944753    4178 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
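
The static pod above runs kube-vip with leader election (lease plndr-cp-lock, 5s duration, 3s renew deadline, 1s retry) advertising the control-plane VIP 192.169.0.254 on eth0, with load-balancing onto port 8443 auto-enabled as noted at 17:53:39.944715. A quick structural sanity check for such a manifest, parsing it with gopkg.in/yaml.v3 (the local file name is an assumption):

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        data, err := os.ReadFile("kube-vip.yaml") // assumed local copy of the manifest
        if err != nil {
            panic(err)
        }
        var pod struct {
            Spec struct {
                Containers []struct {
                    Env []struct {
                        Name  string `yaml:"name"`
                        Value string `yaml:"value"`
                    } `yaml:"env"`
                } `yaml:"containers"`
            } `yaml:"spec"`
        }
        if err := yaml.Unmarshal(data, &pod); err != nil {
            panic(err)
        }
        for _, c := range pod.Spec.Containers {
            for _, e := range c.Env {
                if e.Name == "address" {
                    fmt.Println("advertised VIP:", e.Value) // 192.169.0.254 here
                }
            }
        }
    }
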
	I0926 17:53:39.944822    4178 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0926 17:53:39.953541    4178 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 17:53:39.953597    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0926 17:53:39.961618    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0926 17:53:39.975007    4178 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 17:53:39.988472    4178 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0926 17:53:40.002021    4178 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0926 17:53:40.004933    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 17:53:40.015059    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:40.118867    4178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:53:40.133377    4178 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:53:40.133568    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:40.154757    4178 out.go:177] * Verifying Kubernetes components...
	I0926 17:53:40.196346    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:40.323445    4178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:53:40.338817    4178 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:40.339037    4178 kapi.go:59] client config for ha-476000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key", CAFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7459f00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0926 17:53:40.339084    4178 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0926 17:53:40.339280    4178 node_ready.go:35] waiting up to 6m0s for node "ha-476000-m02" to be "Ready" ...
	I0926 17:53:40.339354    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:40.339359    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:40.339366    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:40.339369    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:47.201921    4178 round_trippers.go:574] Response Status:  in 6862 milliseconds
	I0926 17:53:48.202681    4178 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:48.202709    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:48.202713    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:48.202720    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:48.202724    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:49.203128    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:49.203194    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.1:52091->192.169.0.5:8443: read: connection reset by peer
	I0926 17:53:49.203240    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:49.203247    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:49.203252    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:49.203256    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:50.204478    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:50.204619    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:50.204631    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:50.204642    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:50.204649    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:51.204974    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:51.205045    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:51.205098    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:51.205108    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:51.205118    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:51.205124    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:52.205352    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:52.205474    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:52.205485    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:52.205496    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:52.205505    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:53.206703    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:53.206766    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:53.206822    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:53.206831    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:53.206843    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:53.206849    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:54.208032    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:54.208160    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:54.208172    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:54.208183    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:54.208190    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:55.208420    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:55.208484    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:55.208561    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:55.208572    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:55.208582    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:55.208586    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:56.209388    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:56.209496    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:56.209507    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:56.209517    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:56.209529    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:57.211492    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:57.211560    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:57.211643    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:57.211654    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:57.211665    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:57.211671    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:58.213441    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:58.213520    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:58.213528    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:58.213535    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:58.213538    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:59.215627    4178 round_trippers.go:574] Response Status:  in 1002 milliseconds
	I0926 17:53:59.215689    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:59.215761    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:59.215770    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:59.215781    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:59.215792    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:00.214970    4178 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0926 17:54:00.215057    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:00.215066    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:00.215072    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:00.215075    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:02.766651    4178 round_trippers.go:574] Response Status: 200 OK in 2551 milliseconds
	I0926 17:54:02.767320    4178 node_ready.go:53] node "ha-476000-m02" has status "Ready":"False"
	I0926 17:54:02.767364    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:02.767371    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:02.767378    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:02.767382    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:02.808455    4178 round_trippers.go:574] Response Status: 200 OK in 41 milliseconds
	I0926 17:54:02.839499    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:02.839515    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:02.839522    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:02.839524    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:02.844502    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:03.339950    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:03.339974    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:03.340014    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:03.340033    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:03.343931    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:03.839836    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:03.839849    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:03.839855    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:03.839859    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:03.842811    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:04.340378    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:04.340403    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.340414    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.340421    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.344418    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:04.839736    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:04.839752    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.839758    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.839762    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.842629    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:04.843116    4178 node_ready.go:49] node "ha-476000-m02" has status "Ready":"True"
	I0926 17:54:04.843129    4178 node_ready.go:38] duration metric: took 24.503742617s for node "ha-476000-m02" to be "Ready" ...
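
The 24.5s wait above is a 1s-interval poll of GET /api/v1/nodes/ha-476000-m02 that rides out the connection-refused window from 17:53:49 to 17:54:00 while the API server endpoint comes back (note the stale-host override to 192.169.0.5:8443 at 17:53:40.339084). A client-go sketch of an equivalent loop, assuming current wait/clientcmd APIs; the kubeconfig path and node name are taken from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19711-1128/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = wait.PollUntilContextTimeout(context.Background(), time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "ha-476000-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // connection refused etc.: keep retrying
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println(`node "ha-476000-m02" is Ready`)
    }
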
	I0926 17:54:04.843136    4178 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0926 17:54:04.843170    4178 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0926 17:54:04.843178    4178 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0926 17:54:04.843227    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0926 17:54:04.843232    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.843238    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.843242    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.851447    4178 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0926 17:54:04.858185    4178 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:04.858238    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:04.858243    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.858250    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.858254    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.860121    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:04.860597    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:04.860608    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.860614    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.860619    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.862704    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:05.358322    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:05.358334    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.358341    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.358344    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.361386    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:05.361939    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:05.361947    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.361954    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.361958    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.366335    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:05.858443    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:05.858462    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.858485    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.858489    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.861181    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:05.861691    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:05.861698    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.861704    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.861706    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.863911    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:06.359311    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:06.359342    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.359350    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.359354    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.362329    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:06.362841    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:06.362848    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.362854    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.362864    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.365951    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:06.860115    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:06.860140    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.860152    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.860192    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.863829    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:06.864356    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:06.864364    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.864370    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.864372    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.866293    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:06.866641    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:07.359755    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:07.359781    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.359791    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.359796    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.362929    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:07.363432    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:07.363440    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.363449    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.363454    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.365354    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:07.859403    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:07.859428    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.859440    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.859447    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.863936    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:07.864482    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:07.864489    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.864494    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.864497    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.866695    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:08.359070    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:08.359095    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.359104    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.359110    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.363413    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:08.363975    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:08.363983    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.363989    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.363996    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.366160    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:08.858562    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:08.858596    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.858604    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.858608    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.861584    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:08.862306    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:08.862313    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.862319    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.862329    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.864555    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:09.359666    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:09.359694    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.359706    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.359710    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.364444    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:09.364796    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:09.364802    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.364808    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.364812    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.367017    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:09.367391    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:09.859578    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:09.859628    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.859645    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.859654    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.863289    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:09.863926    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:09.863934    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.863940    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.863942    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.865998    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:10.358368    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:10.358385    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.358391    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.358396    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.366195    4178 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0926 17:54:10.366734    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:10.366743    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.366752    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.366755    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.369544    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:10.859656    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:10.859683    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.859694    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.859701    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.864043    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:10.864491    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:10.864499    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.864504    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.864508    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.866558    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:11.360000    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:11.360026    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.360038    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.360045    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.364064    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:11.364604    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:11.364611    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.364617    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.364620    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.366561    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:11.859988    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:11.860011    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.860023    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.860028    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.863780    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:11.864488    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:11.864496    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.864502    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.864505    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.866527    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:11.866879    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:12.359231    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:12.359302    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.359317    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.359325    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.363142    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:12.363807    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:12.363815    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.363820    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.363823    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.365720    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:12.859295    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:12.859321    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.859332    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.859336    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.863604    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:12.864232    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:12.864243    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.864249    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.864252    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.866340    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:13.360473    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:13.360500    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.360511    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.360516    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.364925    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:13.365659    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:13.365667    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.365672    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.365677    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.367805    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:13.858451    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:13.858477    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.858490    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.858495    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.862381    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:13.862921    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:13.862929    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.862934    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.862938    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.864941    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:14.358942    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:14.358966    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.359005    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.359013    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.365723    4178 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0926 17:54:14.366181    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:14.366189    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.366193    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.366197    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.368552    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:14.368954    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:14.860475    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:14.860501    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.860543    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.860550    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.864207    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:14.864620    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:14.864628    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.864634    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.864637    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.866896    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:15.358734    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:15.358751    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.358757    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.358761    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.361477    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:15.362047    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:15.362056    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.362062    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.362072    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.364404    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:15.859641    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:15.859669    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.859681    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.859690    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.864301    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:15.864755    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:15.864762    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.864767    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.864771    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.866941    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:16.358689    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:16.358713    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.358762    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.358771    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.363038    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:16.363637    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:16.363644    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.363649    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.363665    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.365580    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:16.858829    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:16.858848    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.858857    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.858864    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.861418    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:16.861895    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:16.861903    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.861908    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.861913    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.864330    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:16.864660    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:17.358538    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:17.358557    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.358576    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.358581    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.361634    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:17.362216    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:17.362224    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.362230    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.362235    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.364368    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:17.858951    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:17.859025    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.859068    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.859083    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.863132    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:17.863643    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:17.863651    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.863660    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.863665    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.865816    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:18.358377    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:18.358396    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.358403    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.358429    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.364859    4178 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0926 17:54:18.365288    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:18.365296    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.365303    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.365306    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.367423    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:18.859211    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:18.859237    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.859250    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.859257    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.863321    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:18.863832    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:18.863840    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.863846    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.863849    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.865860    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:18.866261    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:19.358438    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:19.358453    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.358460    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.358463    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.361068    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:19.361685    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:19.361694    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.361700    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.361703    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.364079    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:19.859935    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:19.859961    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.859972    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.859979    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.864189    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:19.864623    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:19.864630    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.864638    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.864641    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.866680    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:20.359100    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:20.359154    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.359164    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.359169    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.362081    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:20.362587    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:20.362595    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.362601    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.362604    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.364581    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:20.860535    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:20.860561    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.860573    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.860581    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.864595    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:20.865051    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:20.865063    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.865070    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.865074    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.866939    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:20.867377    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:21.358839    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:21.358864    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.358910    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.358919    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.362304    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:21.362899    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:21.362907    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.362913    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.362923    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.364904    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:21.859198    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:21.859224    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.859235    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.859244    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.863464    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:21.863902    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:21.863911    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.863916    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.863920    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.866008    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:22.358500    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:22.358557    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.358567    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.358581    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.363039    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:22.363487    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:22.363495    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.363501    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.363504    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.365560    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:22.860486    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:22.860511    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.860523    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.860549    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.865059    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:22.865691    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:22.865699    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.865705    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.865708    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.867780    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:22.868136    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:23.358997    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:23.359023    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.359035    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.359043    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.363268    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:23.363930    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:23.363938    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.363944    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.363948    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.365982    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:23.858407    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:23.858421    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.858452    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.858457    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.861385    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:23.861801    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:23.861812    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.861818    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.861823    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.864061    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:24.360526    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:24.360553    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.360565    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.360571    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.364721    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:24.365349    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:24.365356    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.365362    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.365365    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.367430    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:24.858605    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:24.858630    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.858641    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.858648    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.862472    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:24.863003    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:24.863010    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.863016    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.863018    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.864908    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:25.358639    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:25.358664    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.358677    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.358684    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.362945    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:25.363487    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:25.363495    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.363501    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.363503    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.365691    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:25.366062    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:25.859315    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:25.859333    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.859341    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.859364    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.862801    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:25.863276    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:25.863284    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.863289    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.863293    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.865685    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:26.359001    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:26.359015    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.359021    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.359025    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.361573    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:26.362094    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:26.362101    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.362107    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.362111    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.364144    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:26.858599    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:26.858625    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.858637    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.858644    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.862247    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:26.862753    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:26.862762    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.862767    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.862771    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.864571    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:27.358862    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:27.358888    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.358899    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.358904    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.363109    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:27.363648    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:27.363657    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.363663    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.363669    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.365500    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:27.859752    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:27.859779    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.859790    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.859795    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.864255    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:27.864725    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:27.864733    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.864738    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.864741    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.866764    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:27.867055    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:28.359808    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:28.359835    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.359882    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.359890    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.363146    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:28.363572    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:28.363579    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.363585    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.363589    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.365498    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:28.858708    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:28.858734    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.858746    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.858752    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.862673    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:28.863231    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:28.863238    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.863244    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.863248    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.865181    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:29.359611    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:29.359640    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.359653    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.359660    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.362965    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:29.363411    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:29.363419    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.363425    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.363427    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.365174    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:29.859384    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:29.859402    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.859409    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.859414    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.862499    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:29.863033    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:29.863041    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.863047    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.863050    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.865154    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:30.359191    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:30.359209    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.359255    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.359265    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.361836    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:30.362303    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:30.362312    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.362317    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.362320    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.364567    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:30.364980    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:30.860033    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:30.860066    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.860101    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.860109    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.864359    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:30.864782    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:30.864790    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.864799    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.864805    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.866798    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:31.358678    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:31.358711    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.358762    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.358772    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.363329    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:31.363731    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:31.363739    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.363745    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.363751    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.365894    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:31.858683    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:31.858706    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.858718    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.858724    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.862717    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:31.863254    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:31.863262    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.863268    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.863272    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.865220    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:32.359370    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:32.359420    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.359434    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.359442    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.362904    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:32.363502    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:32.363510    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.363516    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.363518    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.365729    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:32.366016    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:32.859955    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:32.859990    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.859997    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.860001    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.874510    4178 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0926 17:54:32.875130    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:32.875137    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.875142    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.875145    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.883403    4178 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0926 17:54:33.359964    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:33.360006    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.360019    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.360025    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.362527    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:33.362934    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:33.362942    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.362948    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.362953    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.365277    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:33.860043    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:33.860070    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.860082    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.860089    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.864487    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:33.864960    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:33.864968    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.864974    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.864978    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.866813    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:34.359408    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:34.359422    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.359453    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.359457    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.361843    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:34.362407    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:34.362415    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.362419    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.362427    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.364587    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:34.859087    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:34.859113    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.859124    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.859132    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.863123    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:34.863508    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:34.863516    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.863522    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.863525    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.865516    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:34.865853    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:35.359972    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:35.359997    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.360039    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.360048    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.364311    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:35.364957    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:35.364964    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.364970    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.364974    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.367232    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:35.859251    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:35.859265    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.859271    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.859275    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.861746    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:35.862292    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:35.862304    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.862318    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.862323    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.864289    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:36.360234    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:36.360274    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.360284    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.360291    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.363297    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:36.363726    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:36.363734    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.363740    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.363743    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.365689    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:36.859037    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:36.859105    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.859119    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.859130    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.863205    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:36.863621    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:36.863629    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.863635    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.863638    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.865642    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:36.865933    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:37.359101    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:37.359127    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.359139    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.359145    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.363256    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:37.363851    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:37.363859    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.363865    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.363868    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.365908    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:37.859282    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:37.859308    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.859319    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.859325    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.863341    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:37.863718    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:37.863726    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.863731    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.863735    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.865672    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:38.359013    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:38.359055    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.359065    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.359070    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.361936    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:38.362521    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:38.362529    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.362534    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.362538    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.364699    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:38.859426    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:38.859453    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.859466    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.859475    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.863509    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:38.864012    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:38.864020    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.864025    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.864029    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.866259    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:38.866728    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:39.358730    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:39.358748    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.358756    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.358765    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.362410    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:39.362956    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:39.362964    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.362970    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.362979    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.365004    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:39.858564    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:39.858584    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.858592    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.858598    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.861794    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:39.862200    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:39.862208    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.862214    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.862219    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.864175    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.358549    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:40.358586    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.358596    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.358600    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.361533    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:40.362003    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.362011    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.362017    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.362020    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.364141    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:40.860048    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:40.860077    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.860087    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.860093    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.863900    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:40.864305    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.864314    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.864320    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.864322    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.866266    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.866599    4178 pod_ready.go:93] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.866610    4178 pod_ready.go:82] duration metric: took 36.008276067s for pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.866616    4178 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7jwgv" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.866646    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7jwgv
	I0926 17:54:40.866651    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.866657    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.866661    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.868466    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.868930    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.868938    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.868944    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.868948    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.870736    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.871103    4178 pod_ready.go:93] pod "coredns-7c65d6cfc9-7jwgv" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.871111    4178 pod_ready.go:82] duration metric: took 4.489575ms for pod "coredns-7c65d6cfc9-7jwgv" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.871118    4178 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.871146    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-476000
	I0926 17:54:40.871150    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.871156    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.871160    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.873206    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:40.873700    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.873707    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.873713    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.873717    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.875461    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.875829    4178 pod_ready.go:93] pod "etcd-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.875837    4178 pod_ready.go:82] duration metric: took 4.713943ms for pod "etcd-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.875844    4178 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.875875    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-476000-m02
	I0926 17:54:40.875880    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.875885    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.875890    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.877741    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.878137    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:40.878145    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.878151    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.878155    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.880023    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.880375    4178 pod_ready.go:93] pod "etcd-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.880384    4178 pod_ready.go:82] duration metric: took 4.534554ms for pod "etcd-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.880390    4178 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.880419    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-476000-m03
	I0926 17:54:40.880424    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.880429    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.880433    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.882094    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.882474    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:40.882481    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.882486    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.882496    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.884251    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.884613    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "etcd-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:40.884622    4178 pod_ready.go:82] duration metric: took 4.227661ms for pod "etcd-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:40.884628    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "etcd-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
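
The skip just logged shows the two-step check this whole phase repeats: GET the pod, then GET the node named in its spec, and treat the pod as unwaitable when that node reports Ready "Unknown" (as ha-476000-m03 does here). Below is a minimal client-go sketch of that check; it illustrates the pattern, not minikube's actual pod_ready.go, and the package and helper names (sketch, podOnReadyNode) are invented for the example.

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podOnReadyNode mirrors the GET-pod-then-GET-node pattern in the log:
    // a pod only counts as Ready if its hosting node is Ready too.
    func podOnReadyNode(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            // A node condition of Ready "Unknown" triggers the skip seen above.
            if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
                return false, fmt.Errorf("node %q has status Ready:%q", node.Name, c.Status)
            }
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
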
	I0926 17:54:40.884638    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.061560    4178 request.go:632] Waited for 176.87189ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000
	I0926 17:54:41.061616    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000
	I0926 17:54:41.061655    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.061670    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.061677    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.065303    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:41.262138    4178 request.go:632] Waited for 196.341694ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:41.262261    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:41.262270    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.262282    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.262290    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.266333    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:41.266689    4178 pod_ready.go:93] pod "kube-apiserver-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:41.266699    4178 pod_ready.go:82] duration metric: took 382.053003ms for pod "kube-apiserver-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.266705    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
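
The request.go:632 lines ("Waited for ... due to client-side throttling, not priority and fairness") are client-go's default client-side rate limiter at work: with the defaults of QPS=5 and Burst=10, each request beyond the burst queues for about 1/5 s, which matches the ~196ms spacings logged above. The sketch below shows where those knobs live on a rest.Config; the kubeconfig argument and the raised limits are illustrative, not what this test uses.

    package sketch

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient raises the client-go rate budget whose defaults
    // (QPS=5, Burst=10) produce the ~200ms waits logged above.
    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // default is 5 requests/s once the burst is spent
        cfg.Burst = 100 // default is 10
        return kubernetes.NewForConfig(cfg)
    }
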
	I0926 17:54:41.460472    4178 request.go:632] Waited for 193.723597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m02
	I0926 17:54:41.460525    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m02
	I0926 17:54:41.460535    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.460578    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.460588    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.464471    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:41.661359    4178 request.go:632] Waited for 196.505849ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:41.661462    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:41.661475    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.661486    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.661494    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.665427    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:41.665770    4178 pod_ready.go:93] pod "kube-apiserver-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:41.665780    4178 pod_ready.go:82] duration metric: took 399.068092ms for pod "kube-apiserver-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.665789    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.861535    4178 request.go:632] Waited for 195.701622ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m03
	I0926 17:54:41.861634    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m03
	I0926 17:54:41.861648    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.861668    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.861680    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.865792    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:42.061777    4178 request.go:632] Waited for 195.542882ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:42.061836    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:42.061869    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.061880    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.061888    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.066352    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:42.066752    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:42.066763    4178 pod_ready.go:82] duration metric: took 400.967857ms for pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:42.066770    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:42.066774    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.260909    4178 request.go:632] Waited for 194.055971ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000
	I0926 17:54:42.260962    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000
	I0926 17:54:42.261001    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.261021    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.261031    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.264905    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:42.460758    4178 request.go:632] Waited for 195.327303ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:42.460808    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:42.460816    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.460827    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.460837    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.464434    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:42.464776    4178 pod_ready.go:93] pod "kube-controller-manager-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:42.464786    4178 pod_ready.go:82] duration metric: took 398.004555ms for pod "kube-controller-manager-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.464793    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.660316    4178 request.go:632] Waited for 195.46211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m02
	I0926 17:54:42.660458    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m02
	I0926 17:54:42.660474    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.660486    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.660494    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.665327    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:42.860122    4178 request.go:632] Waited for 194.468161ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:42.860201    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:42.860211    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.860222    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.860231    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.864049    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:42.864456    4178 pod_ready.go:93] pod "kube-controller-manager-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:42.864465    4178 pod_ready.go:82] duration metric: took 399.6655ms for pod "kube-controller-manager-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.864473    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:43.060814    4178 request.go:632] Waited for 196.258122ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m03
	I0926 17:54:43.060925    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m03
	I0926 17:54:43.060935    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.060947    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.060956    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.065088    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:43.261824    4178 request.go:632] Waited for 196.351744ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:43.261944    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:43.261957    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.261967    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.261984    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.266272    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:43.266738    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:43.266748    4178 pod_ready.go:82] duration metric: took 402.268136ms for pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:43.266762    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:43.266768    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5d8nb" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:43.460501    4178 request.go:632] Waited for 193.687301ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d8nb
	I0926 17:54:43.460615    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d8nb
	I0926 17:54:43.460627    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.460639    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.460647    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.463846    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:43.662152    4178 request.go:632] Waited for 197.799796ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m04
	I0926 17:54:43.662296    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m04
	I0926 17:54:43.662312    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.662324    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.662334    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.666430    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:43.666928    4178 pod_ready.go:98] node "ha-476000-m04" hosting pod "kube-proxy-5d8nb" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m04" has status "Ready":"Unknown"
	I0926 17:54:43.666940    4178 pod_ready.go:82] duration metric: took 400.16396ms for pod "kube-proxy-5d8nb" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:43.666946    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m04" hosting pod "kube-proxy-5d8nb" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m04" has status "Ready":"Unknown"
	I0926 17:54:43.666950    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bpsqv" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:43.860782    4178 request.go:632] Waited for 193.758415ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bpsqv
	I0926 17:54:43.860851    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bpsqv
	I0926 17:54:43.860893    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.860905    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.860912    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.865061    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:44.060850    4178 request.go:632] Waited for 195.218122ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:44.060902    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:44.060920    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.060968    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.060976    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.065008    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:44.065426    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-proxy-bpsqv" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:44.065437    4178 pod_ready.go:82] duration metric: took 398.480723ms for pod "kube-proxy-bpsqv" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:44.065443    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-proxy-bpsqv" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:44.065448    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ctdh4" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.260264    4178 request.go:632] Waited for 194.757329ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ctdh4
	I0926 17:54:44.260395    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ctdh4
	I0926 17:54:44.260404    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.260417    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.260424    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.264668    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:44.461295    4178 request.go:632] Waited for 196.119983ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:44.461373    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:44.461384    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.461399    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.461407    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.465035    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:44.465397    4178 pod_ready.go:93] pod "kube-proxy-ctdh4" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:44.465406    4178 pod_ready.go:82] duration metric: took 399.951689ms for pod "kube-proxy-ctdh4" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.465413    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrsx7" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.660616    4178 request.go:632] Waited for 195.1575ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrsx7
	I0926 17:54:44.660704    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrsx7
	I0926 17:54:44.660715    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.660726    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.660734    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.664476    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:44.860447    4178 request.go:632] Waited for 195.571151ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:44.860565    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:44.860578    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.860588    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.860596    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.864038    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:44.864554    4178 pod_ready.go:93] pod "kube-proxy-nrsx7" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:44.864566    4178 pod_ready.go:82] duration metric: took 399.145507ms for pod "kube-proxy-nrsx7" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.864575    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.060924    4178 request.go:632] Waited for 196.301993ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000
	I0926 17:54:45.061011    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000
	I0926 17:54:45.061022    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.061034    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.061042    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.065277    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:45.260098    4178 request.go:632] Waited for 194.412657ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:45.260187    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:45.260208    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.260220    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.260229    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.264296    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:45.264558    4178 pod_ready.go:93] pod "kube-scheduler-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:45.264567    4178 pod_ready.go:82] duration metric: took 399.984402ms for pod "kube-scheduler-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.264574    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.460204    4178 request.go:632] Waited for 195.586272ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m02
	I0926 17:54:45.460285    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m02
	I0926 17:54:45.460296    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.460307    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.460315    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.463717    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:45.661528    4178 request.go:632] Waited for 197.284014ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:45.661624    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:45.661634    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.661645    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.661653    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.666080    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:45.666323    4178 pod_ready.go:93] pod "kube-scheduler-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:45.666333    4178 pod_ready.go:82] duration metric: took 401.752851ms for pod "kube-scheduler-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.666340    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.860703    4178 request.go:632] Waited for 194.311899ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m03
	I0926 17:54:45.860736    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m03
	I0926 17:54:45.860740    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.860746    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.860750    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.863521    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:46.061792    4178 request.go:632] Waited for 197.829608ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:46.061901    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:46.061915    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:46.061926    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:46.061934    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:46.065839    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:46.066244    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:46.066255    4178 pod_ready.go:82] duration metric: took 399.908641ms for pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:46.066262    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:46.066267    4178 pod_ready.go:39] duration metric: took 41.222971189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
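
The 41.2s total above is the sum of per-pod polls: the timestamps show each pod re-fetched on a roughly 500ms cadence (…39.858 → …40.358 → …40.860) under the 6m0s per-pod budget the wait messages report. A sketch of that outer loop, reusing the hypothetical podOnReadyNode helper from the earlier sketch under the same clientset assumption:

    package sketch

    import (
        "context"
        "fmt"
        "time"

        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls on the ~500ms cadence visible in the log, up to the
    // same 6m0s ceiling the messages above report. Errors are retried, not fatal.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            if ready, err := podOnReadyNode(ctx, cs, ns, name); err == nil && ready {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("pod %s/%s not Ready within 6m", ns, name)
    }
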
	I0926 17:54:46.066282    4178 api_server.go:52] waiting for apiserver process to appear ...
	I0926 17:54:46.066375    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:54:46.079414    4178 logs.go:276] 2 containers: [7aabe646a9fe 3fae3e334c04]
	I0926 17:54:46.079513    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:54:46.092379    4178 logs.go:276] 2 containers: [f525de12dc97 bcf266ec7564]
	I0926 17:54:46.092476    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:54:46.105011    4178 logs.go:276] 0 containers: []
	W0926 17:54:46.105025    4178 logs.go:278] No container was found matching "coredns"
	I0926 17:54:46.105107    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:54:46.118452    4178 logs.go:276] 2 containers: [d9fc15fd82f1 d8ed18933791]
	I0926 17:54:46.118550    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:54:46.132316    4178 logs.go:276] 2 containers: [b52376c9cdcd a5a9a7e18064]
	I0926 17:54:46.132402    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:54:46.145649    4178 logs.go:276] 2 containers: [350a07550ad1 f82271bde6b0]
	I0926 17:54:46.145746    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:54:46.160399    4178 logs.go:276] 2 containers: [b73a2fda347d 981336ad66c6]
	I0926 17:54:46.160426    4178 logs.go:123] Gathering logs for kube-apiserver [7aabe646a9fe] ...
	I0926 17:54:46.160432    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aabe646a9fe"
	I0926 17:54:46.180676    4178 logs.go:123] Gathering logs for kube-apiserver [3fae3e334c04] ...
	I0926 17:54:46.180690    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fae3e334c04"
	I0926 17:54:46.213941    4178 logs.go:123] Gathering logs for kindnet [981336ad66c6] ...
	I0926 17:54:46.213956    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 981336ad66c6"
	I0926 17:54:46.229008    4178 logs.go:123] Gathering logs for container status ...
	I0926 17:54:46.229022    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:54:46.263727    4178 logs.go:123] Gathering logs for dmesg ...
	I0926 17:54:46.263743    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:54:46.275216    4178 logs.go:123] Gathering logs for etcd [f525de12dc97] ...
	I0926 17:54:46.275229    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f525de12dc97"
	I0926 17:54:46.340546    4178 logs.go:123] Gathering logs for etcd [bcf266ec7564] ...
	I0926 17:54:46.340563    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf266ec7564"
	I0926 17:54:46.368786    4178 logs.go:123] Gathering logs for kube-scheduler [d8ed18933791] ...
	I0926 17:54:46.368802    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ed18933791"
	I0926 17:54:46.392911    4178 logs.go:123] Gathering logs for kube-proxy [a5a9a7e18064] ...
	I0926 17:54:46.392926    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5a9a7e18064"
	I0926 17:54:46.411685    4178 logs.go:123] Gathering logs for kubelet ...
	I0926 17:54:46.411700    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:54:46.453572    4178 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:54:46.453588    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:54:46.819319    4178 logs.go:123] Gathering logs for kube-scheduler [d9fc15fd82f1] ...
	I0926 17:54:46.819338    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9fc15fd82f1"
	I0926 17:54:46.834299    4178 logs.go:123] Gathering logs for kube-proxy [b52376c9cdcd] ...
	I0926 17:54:46.834315    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b52376c9cdcd"
	I0926 17:54:46.850264    4178 logs.go:123] Gathering logs for Docker ...
	I0926 17:54:46.850278    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:54:46.881220    4178 logs.go:123] Gathering logs for kube-controller-manager [350a07550ad1] ...
	I0926 17:54:46.881233    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 350a07550ad1"
	I0926 17:54:46.915123    4178 logs.go:123] Gathering logs for kube-controller-manager [f82271bde6b0] ...
	I0926 17:54:46.915139    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82271bde6b0"
	I0926 17:54:46.943154    4178 logs.go:123] Gathering logs for kindnet [b73a2fda347d] ...
	I0926 17:54:46.943169    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a2fda347d"
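
Each "Gathering logs" round above is two shell steps per component: list matching container IDs with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tail 400 lines from each ID. In the test those commands run inside the VM over SSH (ssh_runner); the sketch below takes the one liberty of running them against a local Docker daemon. Calling tailComponentLogs("kube-apiserver") reproduces the first step of the round.

    package sketch

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // tailComponentLogs runs the same two docker commands as the log above,
    // locally rather than over SSH.
    func tailComponentLogs(component string) error {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return err
        }
        for _, id := range strings.Fields(string(out)) {
            logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                return err
            }
            fmt.Printf("==> %s [%s] <==\n%s", component, id, logs)
        }
        return nil
    }
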
	I0926 17:54:49.459929    4178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 17:54:49.472910    4178 api_server.go:72] duration metric: took 1m9.339247453s to wait for apiserver process to appear ...
	I0926 17:54:49.472923    4178 api_server.go:88] waiting for apiserver healthz status ...
	I0926 17:54:49.473016    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:54:49.489783    4178 logs.go:276] 2 containers: [7aabe646a9fe 3fae3e334c04]
	I0926 17:54:49.489876    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:54:49.503069    4178 logs.go:276] 2 containers: [f525de12dc97 bcf266ec7564]
	I0926 17:54:49.503157    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:54:49.514340    4178 logs.go:276] 0 containers: []
	W0926 17:54:49.514353    4178 logs.go:278] No container was found matching "coredns"
	I0926 17:54:49.514430    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:54:49.528690    4178 logs.go:276] 2 containers: [d9fc15fd82f1 d8ed18933791]
	I0926 17:54:49.528782    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:54:49.540774    4178 logs.go:276] 2 containers: [b52376c9cdcd a5a9a7e18064]
	I0926 17:54:49.540870    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:54:49.553605    4178 logs.go:276] 2 containers: [350a07550ad1 f82271bde6b0]
	I0926 17:54:49.553693    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:54:49.566939    4178 logs.go:276] 2 containers: [b73a2fda347d 981336ad66c6]
	I0926 17:54:49.566961    4178 logs.go:123] Gathering logs for kindnet [981336ad66c6] ...
	I0926 17:54:49.566967    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 981336ad66c6"
	I0926 17:54:49.584163    4178 logs.go:123] Gathering logs for etcd [bcf266ec7564] ...
	I0926 17:54:49.584179    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf266ec7564"
	I0926 17:54:49.608092    4178 logs.go:123] Gathering logs for kube-controller-manager [f82271bde6b0] ...
	I0926 17:54:49.608107    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82271bde6b0"
	I0926 17:54:49.640526    4178 logs.go:123] Gathering logs for etcd [f525de12dc97] ...
	I0926 17:54:49.640542    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f525de12dc97"
	I0926 17:54:49.707920    4178 logs.go:123] Gathering logs for kube-scheduler [d9fc15fd82f1] ...
	I0926 17:54:49.707937    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9fc15fd82f1"
	I0926 17:54:49.725537    4178 logs.go:123] Gathering logs for kube-proxy [b52376c9cdcd] ...
	I0926 17:54:49.725551    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b52376c9cdcd"
	I0926 17:54:49.747118    4178 logs.go:123] Gathering logs for kube-proxy [a5a9a7e18064] ...
	I0926 17:54:49.747134    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5a9a7e18064"
	I0926 17:54:49.763059    4178 logs.go:123] Gathering logs for kindnet [b73a2fda347d] ...
	I0926 17:54:49.763073    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a2fda347d"
	I0926 17:54:49.780606    4178 logs.go:123] Gathering logs for container status ...
	I0926 17:54:49.780619    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:54:49.815474    4178 logs.go:123] Gathering logs for kubelet ...
	I0926 17:54:49.815490    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:54:49.856341    4178 logs.go:123] Gathering logs for kube-apiserver [3fae3e334c04] ...
	I0926 17:54:49.856359    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fae3e334c04"
	I0926 17:54:49.895001    4178 logs.go:123] Gathering logs for kube-apiserver [7aabe646a9fe] ...
	I0926 17:54:49.895016    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aabe646a9fe"
	I0926 17:54:49.915291    4178 logs.go:123] Gathering logs for kube-scheduler [d8ed18933791] ...
	I0926 17:54:49.915307    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ed18933791"
	I0926 17:54:49.931682    4178 logs.go:123] Gathering logs for kube-controller-manager [350a07550ad1] ...
	I0926 17:54:49.931698    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 350a07550ad1"
	I0926 17:54:49.962905    4178 logs.go:123] Gathering logs for Docker ...
	I0926 17:54:49.962920    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:54:49.995739    4178 logs.go:123] Gathering logs for dmesg ...
	I0926 17:54:49.995756    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:54:50.006748    4178 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:54:50.006764    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:54:52.683223    4178 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0926 17:54:52.688111    4178 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0926 17:54:52.688148    4178 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0926 17:54:52.688152    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:52.688158    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:52.688162    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:52.688774    4178 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0926 17:54:52.688866    4178 api_server.go:141] control plane version: v1.31.1
	I0926 17:54:52.688877    4178 api_server.go:131] duration metric: took 3.215937625s to wait for apiserver health ...
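
The healthz gate is a plain HTTPS GET that must come back 200 with a body of "ok", followed by GET /version to record the control-plane version (v1.31.1 here). A rough sketch follows; note the real client authenticates with the kubeconfig's CA and client certificates, whereas this demo skips TLS verification, an assumption acceptable only against a throwaway test VM.

    package sketch

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "strings"
        "time"
    )

    // apiserverHealthy issues the same GET /healthz as the check above.
    // InsecureSkipVerify is a demo shortcut; minikube trusts the cluster CA
    // and presents the kubeconfig's client certs instead.
    func apiserverHealthy(endpoint string) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(endpoint + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || strings.TrimSpace(string(body)) != "ok" {
            return fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
        }
        return nil
    }

Used as apiserverHealthy("https://192.169.0.5:8443") against the endpoint above.
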
	I0926 17:54:52.688882    4178 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 17:54:52.688964    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:54:52.702208    4178 logs.go:276] 2 containers: [7aabe646a9fe 3fae3e334c04]
	I0926 17:54:52.702296    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:54:52.716057    4178 logs.go:276] 2 containers: [f525de12dc97 bcf266ec7564]
	I0926 17:54:52.716146    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:54:52.730288    4178 logs.go:276] 0 containers: []
	W0926 17:54:52.730303    4178 logs.go:278] No container was found matching "coredns"
	I0926 17:54:52.730387    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:54:52.744133    4178 logs.go:276] 2 containers: [d9fc15fd82f1 d8ed18933791]
	I0926 17:54:52.744229    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:54:52.757357    4178 logs.go:276] 2 containers: [b52376c9cdcd a5a9a7e18064]
	I0926 17:54:52.757447    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:54:52.770397    4178 logs.go:276] 2 containers: [350a07550ad1 f82271bde6b0]
	I0926 17:54:52.770488    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:54:52.783588    4178 logs.go:276] 2 containers: [b73a2fda347d 981336ad66c6]
	I0926 17:54:52.783609    4178 logs.go:123] Gathering logs for dmesg ...
	I0926 17:54:52.783615    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:54:52.794149    4178 logs.go:123] Gathering logs for kube-scheduler [d9fc15fd82f1] ...
	I0926 17:54:52.794162    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9fc15fd82f1"
	I0926 17:54:52.810239    4178 logs.go:123] Gathering logs for kube-proxy [a5a9a7e18064] ...
	I0926 17:54:52.810253    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5a9a7e18064"
	I0926 17:54:52.828364    4178 logs.go:123] Gathering logs for kube-controller-manager [350a07550ad1] ...
	I0926 17:54:52.828379    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 350a07550ad1"
	I0926 17:54:52.859712    4178 logs.go:123] Gathering logs for kindnet [b73a2fda347d] ...
	I0926 17:54:52.859726    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a2fda347d"
	I0926 17:54:52.877881    4178 logs.go:123] Gathering logs for container status ...
	I0926 17:54:52.877898    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:54:52.920788    4178 logs.go:123] Gathering logs for kube-scheduler [d8ed18933791] ...
	I0926 17:54:52.920802    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ed18933791"
	I0926 17:54:52.937686    4178 logs.go:123] Gathering logs for Docker ...
	I0926 17:54:52.937700    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:54:52.970435    4178 logs.go:123] Gathering logs for kubelet ...
	I0926 17:54:52.970449    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:54:53.015652    4178 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:54:53.015669    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:54:53.184377    4178 logs.go:123] Gathering logs for etcd [f525de12dc97] ...
	I0926 17:54:53.184391    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f525de12dc97"
	I0926 17:54:53.249067    4178 logs.go:123] Gathering logs for etcd [bcf266ec7564] ...
	I0926 17:54:53.249083    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf266ec7564"
	I0926 17:54:53.274003    4178 logs.go:123] Gathering logs for kube-controller-manager [f82271bde6b0] ...
	I0926 17:54:53.274019    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82271bde6b0"
	I0926 17:54:53.300047    4178 logs.go:123] Gathering logs for kube-apiserver [7aabe646a9fe] ...
	I0926 17:54:53.300062    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aabe646a9fe"
	I0926 17:54:53.321481    4178 logs.go:123] Gathering logs for kube-apiserver [3fae3e334c04] ...
	I0926 17:54:53.321495    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fae3e334c04"
	I0926 17:54:53.356023    4178 logs.go:123] Gathering logs for kube-proxy [b52376c9cdcd] ...
	I0926 17:54:53.356038    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b52376c9cdcd"
	I0926 17:54:53.374219    4178 logs.go:123] Gathering logs for kindnet [981336ad66c6] ...
	I0926 17:54:53.374233    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 981336ad66c6"
	I0926 17:54:55.893460    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0926 17:54:55.893486    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.893529    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.893539    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.899854    4178 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0926 17:54:55.904904    4178 system_pods.go:59] 26 kube-system pods found
	I0926 17:54:55.904920    4178 system_pods.go:61] "coredns-7c65d6cfc9-44l9n" [8009053f-fc43-43ba-a44e-fd1d53e88617] Running
	I0926 17:54:55.904925    4178 system_pods.go:61] "coredns-7c65d6cfc9-7jwgv" [20fb38a0-b993-41b3-9c91-98802d75ef47] Running
	I0926 17:54:55.904928    4178 system_pods.go:61] "etcd-ha-476000" [469cfe32-fa29-4816-afd7-3f115a3a6c67] Running
	I0926 17:54:55.904930    4178 system_pods.go:61] "etcd-ha-476000-m02" [d4c3ec0c-a151-4047-9a30-93cae10d24a0] Running
	I0926 17:54:55.904933    4178 system_pods.go:61] "etcd-ha-476000-m03" [e9166a94-af57-49ec-91a3-dc0c36083ca4] Running
	I0926 17:54:55.904936    4178 system_pods.go:61] "kindnet-44vxl" [488a3806-d7c1-4397-bff8-00d9ea3cb984] Running
	I0926 17:54:55.904938    4178 system_pods.go:61] "kindnet-4pnxr" [23b10d5b-fd63-4284-9c44-413b8bf80354] Running
	I0926 17:54:55.904941    4178 system_pods.go:61] "kindnet-hhrtc" [7ac457e3-ae85-4314-99dc-da37f5875807] Running
	I0926 17:54:55.904943    4178 system_pods.go:61] "kindnet-lgj66" [63fc5d71-c403-40d7-85a7-6ff48e307a79] Running
	I0926 17:54:55.904946    4178 system_pods.go:61] "kube-apiserver-ha-476000" [5c5e9fb4-0cda-4892-bae5-e51e839d2573] Running
	I0926 17:54:55.904948    4178 system_pods.go:61] "kube-apiserver-ha-476000-m02" [2d6d0355-152b-4be4-b810-f1c80c9ef24f] Running
	I0926 17:54:55.904951    4178 system_pods.go:61] "kube-apiserver-ha-476000-m03" [3ce79eb3-635c-4632-82e0-7240a7d49c50] Running
	I0926 17:54:55.904954    4178 system_pods.go:61] "kube-controller-manager-ha-476000" [9edef370-886e-4260-a3d3-99be564d90c6] Running
	I0926 17:54:55.904957    4178 system_pods.go:61] "kube-controller-manager-ha-476000-m02" [5eeed951-eaba-436f-9f80-8f68cb50425d] Running
	I0926 17:54:55.904960    4178 system_pods.go:61] "kube-controller-manager-ha-476000-m03" [e60f68a5-a353-4501-81c1-ac7d76820373] Running
	I0926 17:54:55.904962    4178 system_pods.go:61] "kube-proxy-5d8nb" [1e9c0178-2d58-4ca1-9fbe-c8e54d91bf1a] Running
	I0926 17:54:55.904965    4178 system_pods.go:61] "kube-proxy-bpsqv" [c264b112-c037-4332-9c47-2561ecb928f1] Running
	I0926 17:54:55.904967    4178 system_pods.go:61] "kube-proxy-ctdh4" [9a21a19a-5cad-4eb9-97f3-4c654fbbe59b] Running
	I0926 17:54:55.904970    4178 system_pods.go:61] "kube-proxy-nrsx7" [14b031d0-b044-4500-8cc1-1397c83c1886] Running
	I0926 17:54:55.904973    4178 system_pods.go:61] "kube-scheduler-ha-476000" [ef0e308c-1b28-413c-846e-24935489434d] Running
	I0926 17:54:55.904976    4178 system_pods.go:61] "kube-scheduler-ha-476000-m02" [2b75a7bc-a19c-4aeb-8521-10bd963cacbb] Running
	I0926 17:54:55.904978    4178 system_pods.go:61] "kube-scheduler-ha-476000-m03" [206cbf3f-bff1-4164-a622-3be7593c08ac] Running
	I0926 17:54:55.904981    4178 system_pods.go:61] "kube-vip-ha-476000" [376a9128-fe3c-41aa-b26c-da921ae20e68] Running
	I0926 17:54:55.904997    4178 system_pods.go:61] "kube-vip-ha-476000-m02" [7e907e74-7f15-4c04-b463-682a14766e66] Running
	I0926 17:54:55.905002    4178 system_pods.go:61] "kube-vip-ha-476000-m03" [bf2e3b8c-cf42-45b0-a4d7-a32b820bab21] Running
	I0926 17:54:55.905005    4178 system_pods.go:61] "storage-provisioner" [e3e367a7-6cda-4177-a81d-7897333308d7] Running
	I0926 17:54:55.905009    4178 system_pods.go:74] duration metric: took 3.216111125s to wait for pod list to return data ...
	I0926 17:54:55.905015    4178 default_sa.go:34] waiting for default service account to be created ...
	I0926 17:54:55.905062    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0926 17:54:55.905068    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.905073    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.905077    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.907842    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:55.908016    4178 default_sa.go:45] found service account: "default"
	I0926 17:54:55.908026    4178 default_sa.go:55] duration metric: took 3.006211ms for default service account to be created ...
	I0926 17:54:55.908031    4178 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 17:54:55.908061    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0926 17:54:55.908066    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.908071    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.908075    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.912026    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:55.917054    4178 system_pods.go:86] 26 kube-system pods found
	I0926 17:54:55.917066    4178 system_pods.go:89] "coredns-7c65d6cfc9-44l9n" [8009053f-fc43-43ba-a44e-fd1d53e88617] Running
	I0926 17:54:55.917070    4178 system_pods.go:89] "coredns-7c65d6cfc9-7jwgv" [20fb38a0-b993-41b3-9c91-98802d75ef47] Running
	I0926 17:54:55.917073    4178 system_pods.go:89] "etcd-ha-476000" [469cfe32-fa29-4816-afd7-3f115a3a6c67] Running
	I0926 17:54:55.917076    4178 system_pods.go:89] "etcd-ha-476000-m02" [d4c3ec0c-a151-4047-9a30-93cae10d24a0] Running
	I0926 17:54:55.917080    4178 system_pods.go:89] "etcd-ha-476000-m03" [e9166a94-af57-49ec-91a3-dc0c36083ca4] Running
	I0926 17:54:55.917083    4178 system_pods.go:89] "kindnet-44vxl" [488a3806-d7c1-4397-bff8-00d9ea3cb984] Running
	I0926 17:54:55.917085    4178 system_pods.go:89] "kindnet-4pnxr" [23b10d5b-fd63-4284-9c44-413b8bf80354] Running
	I0926 17:54:55.917088    4178 system_pods.go:89] "kindnet-hhrtc" [7ac457e3-ae85-4314-99dc-da37f5875807] Running
	I0926 17:54:55.917091    4178 system_pods.go:89] "kindnet-lgj66" [63fc5d71-c403-40d7-85a7-6ff48e307a79] Running
	I0926 17:54:55.917094    4178 system_pods.go:89] "kube-apiserver-ha-476000" [5c5e9fb4-0cda-4892-bae5-e51e839d2573] Running
	I0926 17:54:55.917097    4178 system_pods.go:89] "kube-apiserver-ha-476000-m02" [2d6d0355-152b-4be4-b810-f1c80c9ef24f] Running
	I0926 17:54:55.917100    4178 system_pods.go:89] "kube-apiserver-ha-476000-m03" [3ce79eb3-635c-4632-82e0-7240a7d49c50] Running
	I0926 17:54:55.917103    4178 system_pods.go:89] "kube-controller-manager-ha-476000" [9edef370-886e-4260-a3d3-99be564d90c6] Running
	I0926 17:54:55.917106    4178 system_pods.go:89] "kube-controller-manager-ha-476000-m02" [5eeed951-eaba-436f-9f80-8f68cb50425d] Running
	I0926 17:54:55.917110    4178 system_pods.go:89] "kube-controller-manager-ha-476000-m03" [e60f68a5-a353-4501-81c1-ac7d76820373] Running
	I0926 17:54:55.917113    4178 system_pods.go:89] "kube-proxy-5d8nb" [1e9c0178-2d58-4ca1-9fbe-c8e54d91bf1a] Running
	I0926 17:54:55.917116    4178 system_pods.go:89] "kube-proxy-bpsqv" [c264b112-c037-4332-9c47-2561ecb928f1] Running
	I0926 17:54:55.917123    4178 system_pods.go:89] "kube-proxy-ctdh4" [9a21a19a-5cad-4eb9-97f3-4c654fbbe59b] Running
	I0926 17:54:55.917126    4178 system_pods.go:89] "kube-proxy-nrsx7" [14b031d0-b044-4500-8cc1-1397c83c1886] Running
	I0926 17:54:55.917129    4178 system_pods.go:89] "kube-scheduler-ha-476000" [ef0e308c-1b28-413c-846e-24935489434d] Running
	I0926 17:54:55.917132    4178 system_pods.go:89] "kube-scheduler-ha-476000-m02" [2b75a7bc-a19c-4aeb-8521-10bd963cacbb] Running
	I0926 17:54:55.917135    4178 system_pods.go:89] "kube-scheduler-ha-476000-m03" [206cbf3f-bff1-4164-a622-3be7593c08ac] Running
	I0926 17:54:55.917138    4178 system_pods.go:89] "kube-vip-ha-476000" [376a9128-fe3c-41aa-b26c-da921ae20e68] Running
	I0926 17:54:55.917140    4178 system_pods.go:89] "kube-vip-ha-476000-m02" [7e907e74-7f15-4c04-b463-682a14766e66] Running
	I0926 17:54:55.917144    4178 system_pods.go:89] "kube-vip-ha-476000-m03" [bf2e3b8c-cf42-45b0-a4d7-a32b820bab21] Running
	I0926 17:54:55.917146    4178 system_pods.go:89] "storage-provisioner" [e3e367a7-6cda-4177-a81d-7897333308d7] Running
	I0926 17:54:55.917151    4178 system_pods.go:126] duration metric: took 9.116472ms to wait for k8s-apps to be running ...
	I0926 17:54:55.917160    4178 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 17:54:55.917225    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 17:54:55.928854    4178 system_svc.go:56] duration metric: took 11.69353ms WaitForService to wait for kubelet
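The kubelet check above shells out to systemd: `is-active --quiet` exits zero only while the unit is active, so the error from Run is the whole signal. A minimal Go sketch of that probe (not minikube's code; it passes the unit name directly rather than the log's exact argument list):

// Probe a systemd unit's liveness the way the log does: rely on the
// exit status of `systemctl is-active --quiet <unit>`.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	// err == nil means exit status 0, i.e. the unit is active.
	fmt.Println("kubelet active:", err == nil)
}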
	I0926 17:54:55.928867    4178 kubeadm.go:582] duration metric: took 1m15.795183486s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:54:55.928878    4178 node_conditions.go:102] verifying NodePressure condition ...
	I0926 17:54:55.928918    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0926 17:54:55.928924    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.928930    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.928933    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.932146    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:55.933143    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933159    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933173    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933176    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933181    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933183    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933186    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933190    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933193    4178 node_conditions.go:105] duration metric: took 4.311525ms to run NodePressure ...
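The NodePressure pass above is a single GET of /api/v1/nodes followed by reading each node's capacity fields. A self-contained sketch of that readout (illustrative only: real access to port 8443 needs the cluster CA and client certificates, which this plain http.Get omits):

// List nodes and print the two capacity fields the log inspects.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Capacity map[string]string `json:"capacity"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	// Assumption: the API server is reachable and trusts this client.
	resp, err := http.Get("https://192.169.0.5:8443/api/v1/nodes")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var nodes nodeList
	if err := json.NewDecoder(resp.Body).Decode(&nodes); err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Metadata.Name,
			n.Status.Capacity["ephemeral-storage"],
			n.Status.Capacity["cpu"])
	}
}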
	I0926 17:54:55.933202    4178 start.go:241] waiting for startup goroutines ...
	I0926 17:54:55.933219    4178 start.go:255] writing updated cluster config ...
	I0926 17:54:55.954947    4178 out.go:201] 
	I0926 17:54:55.975717    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:54:55.975787    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:54:55.997338    4178 out.go:177] * Starting "ha-476000-m03" control-plane node in "ha-476000" cluster
	I0926 17:54:56.055744    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:54:56.055778    4178 cache.go:56] Caching tarball of preloaded images
	I0926 17:54:56.056007    4178 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 17:54:56.056029    4178 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:54:56.056173    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:54:56.057121    4178 start.go:360] acquireMachinesLock for ha-476000-m03: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:54:56.057290    4178 start.go:364] duration metric: took 139.967µs to acquireMachinesLock for "ha-476000-m03"
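The machines lock above is configured with Delay:500ms and Timeout:13m, i.e. retry every half second until a deadline. An in-process stand-in for that acquire loop (the real lock is a named cross-process mutex; this sketch only mirrors the delay/timeout semantics):

// Acquire a capacity-1 semaphore, retrying every `delay` until `timeout`.
package main

import (
	"errors"
	"fmt"
	"time"
)

var sem = make(chan struct{}, 1) // one holder at a time

func acquire(delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		select {
		case sem <- struct{}{}:
			return nil // acquired
		default:
		}
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring machines lock")
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	if err := acquire(500*time.Millisecond, 13*time.Minute); err != nil {
		panic(err)
	}
	fmt.Printf("took %v to acquire lock\n", time.Since(start))
	<-sem // release
}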
	I0926 17:54:56.057321    4178 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:54:56.057331    4178 fix.go:54] fixHost starting: m03
	I0926 17:54:56.057738    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:54:56.057766    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:54:56.066973    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52106
	I0926 17:54:56.067348    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:54:56.067691    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:54:56.067705    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:54:56.067918    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:54:56.068036    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:54:56.068122    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetState
	I0926 17:54:56.068201    4178 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:54:56.068289    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid from json: 3537
	I0926 17:54:56.069219    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid 3537 missing from process table
	I0926 17:54:56.069237    4178 fix.go:112] recreateIfNeeded on ha-476000-m03: state=Stopped err=<nil>
	I0926 17:54:56.069245    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	W0926 17:54:56.069331    4178 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:54:56.090482    4178 out.go:177] * Restarting existing hyperkit VM for "ha-476000-m03" ...
	I0926 17:54:56.132629    4178 main.go:141] libmachine: (ha-476000-m03) Calling .Start
	I0926 17:54:56.132887    4178 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:54:56.132957    4178 main.go:141] libmachine: (ha-476000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid
	I0926 17:54:56.134746    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid 3537 missing from process table
	I0926 17:54:56.134764    4178 main.go:141] libmachine: (ha-476000-m03) DBG | pid 3537 is in state "Stopped"
	I0926 17:54:56.134782    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid...
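The stale-pid cleanup above follows a standard pattern: read the recorded pid, probe whether that process still exists, and remove the file if it does not. A minimal sketch (the pid-file path is hypothetical, not the driver's actual layout):

// A pid file is stale when its recorded process is gone; signal 0
// probes for existence without delivering a signal.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

func pidAlive(pid int) bool {
	return syscall.Kill(pid, 0) == nil
}

func main() {
	const pidFile = "/tmp/hyperkit.pid" // illustrative path
	b, err := os.ReadFile(pidFile)
	if err != nil {
		return // no pid file: nothing to clean up
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(b)))
	if err == nil && !pidAlive(pid) {
		fmt.Printf("removing stale pid file for pid %d\n", pid)
		os.Remove(pidFile)
	}
}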
	I0926 17:54:56.135225    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Using UUID 91a51069-a363-4c64-acd8-a07fa14dbb0d
	I0926 17:54:56.162007    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Generated MAC 66:6f:5a:2d:e2:16
	I0926 17:54:56.162027    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000
	I0926 17:54:56.162143    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"91a51069-a363-4c64-acd8-a07fa14dbb0d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003accc0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:54:56.162181    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"91a51069-a363-4c64-acd8-a07fa14dbb0d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003accc0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:54:56.162253    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "91a51069-a363-4c64-acd8-a07fa14dbb0d", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/ha-476000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machine
s/ha-476000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"}
	I0926 17:54:56.162300    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 91a51069-a363-4c64-acd8-a07fa14dbb0d -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/ha-476000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"
	I0926 17:54:56.162312    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 17:54:56.163637    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Pid is 4226
	I0926 17:54:56.164043    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Attempt 0
	I0926 17:54:56.164071    4178 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:54:56.164140    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid from json: 4226
	I0926 17:54:56.166126    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Searching for 66:6f:5a:2d:e2:16 in /var/db/dhcpd_leases ...
	I0926 17:54:56.166206    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:54:56.166235    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 17:54:56.166254    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 17:54:56.166288    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 17:54:56.166308    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f7515c}
	I0926 17:54:56.166318    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Found match: 66:6f:5a:2d:e2:16
	I0926 17:54:56.166327    4178 main.go:141] libmachine: (ha-476000-m03) DBG | IP: 192.169.0.7
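The DHCP search above resolves the VM's generated MAC to an IP by scanning macOS's /var/db/dhcpd_leases, where each lease entry carries ip_address= and hw_address= lines. A standalone sketch of that scan (not minikube's code; the MAC is taken from the log, and real lease files may format MAC octets without leading zeros):

// Map a MAC address to its leased IP by scanning dhcpd_leases.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const wantMAC = "66:6f:5a:2d:e2:16"
	f, err := os.Open("/var/db/dhcpd_leases")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=")
		}
		// hw_address lines look like "hw_address=1,66:6f:5a:2d:e2:16".
		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, wantMAC) {
			fmt.Println("found IP:", ip)
			return
		}
	}
}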
	I0926 17:54:56.166332    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetConfigRaw
	I0926 17:54:56.166976    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetIP
	I0926 17:54:56.167202    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:54:56.167675    4178 machine.go:93] provisionDockerMachine start ...
	I0926 17:54:56.167686    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:54:56.167814    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:54:56.167961    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:54:56.168088    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:54:56.168207    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:54:56.168321    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:54:56.168450    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:54:56.168613    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:54:56.168622    4178 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 17:54:56.172038    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 17:54:56.180188    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 17:54:56.181229    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:54:56.181258    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:54:56.181274    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:54:56.181290    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:54:56.563523    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 17:54:56.563541    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 17:54:56.678338    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:54:56.678355    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:54:56.678363    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:54:56.678373    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:54:56.679203    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 17:54:56.679212    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 17:55:02.300815    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0926 17:55:02.300833    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0926 17:55:02.300855    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0926 17:55:02.325228    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0926 17:55:31.235618    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 17:55:31.235633    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetMachineName
	I0926 17:55:31.235773    4178 buildroot.go:166] provisioning hostname "ha-476000-m03"
	I0926 17:55:31.235783    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetMachineName
	I0926 17:55:31.235886    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.235992    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.236097    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.236189    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.236274    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.236414    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.236550    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.236559    4178 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-476000-m03 && echo "ha-476000-m03" | sudo tee /etc/hostname
	I0926 17:55:31.305642    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-476000-m03
	
	I0926 17:55:31.305657    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.305790    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.305908    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.306006    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.306089    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.306235    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.306383    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.306394    4178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-476000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-476000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-476000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:55:31.369873    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 17:55:31.369889    4178 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 17:55:31.369903    4178 buildroot.go:174] setting up certificates
	I0926 17:55:31.369909    4178 provision.go:84] configureAuth start
	I0926 17:55:31.369916    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetMachineName
	I0926 17:55:31.370048    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetIP
	I0926 17:55:31.370147    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.370234    4178 provision.go:143] copyHostCerts
	I0926 17:55:31.370268    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:55:31.370317    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 17:55:31.370322    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:55:31.370451    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 17:55:31.370647    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:55:31.370676    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 17:55:31.370680    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:55:31.370748    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 17:55:31.370903    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:55:31.370932    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 17:55:31.370937    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:55:31.371006    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 17:55:31.371150    4178 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.ha-476000-m03 san=[127.0.0.1 192.169.0.7 ha-476000-m03 localhost minikube]
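The server cert generated above carries the SAN list [127.0.0.1 192.169.0.7 ha-476000-m03 localhost minikube] so the Docker endpoint can be verified by IP or hostname. A compact sketch of issuing a cert with those SANs (not minikube's implementation: this one is self-signed for brevity, whereas the real server.pem is signed by the minikube CA key shown in the log):

// Issue an RSA server certificate with IP and DNS SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-476000-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
		DNSNames:     []string{"ha-476000-m03", "localhost", "minikube"},
	}
	// Self-signed: template doubles as parent. minikube signs with its CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}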
	I0926 17:55:31.544988    4178 provision.go:177] copyRemoteCerts
	I0926 17:55:31.545045    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 17:55:31.545059    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.545196    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.545298    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.545402    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.545491    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:31.580851    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 17:55:31.580928    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 17:55:31.601357    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 17:55:31.601440    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 17:55:31.621840    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 17:55:31.621921    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 17:55:31.641722    4178 provision.go:87] duration metric: took 271.803372ms to configureAuth
	I0926 17:55:31.641736    4178 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:55:31.641909    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:55:31.641923    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:31.642055    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.642148    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.642236    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.642329    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.642416    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.642531    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.642652    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.642659    4178 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:55:31.699187    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:55:31.699200    4178 buildroot.go:70] root file system type: tmpfs
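The df probe above classifies the guest's root filesystem (tmpfs here, since buildroot boots into RAM). The same answer is available without shelling out via statfs(2); a Linux-only sketch, where 0x01021994 is the kernel's TMPFS magic number:

// Classify the root filesystem by its statfs magic instead of df.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/", &st); err != nil {
		panic(err)
	}
	const tmpfsMagic = 0x01021994 // TMPFS_MAGIC from linux/magic.h
	if st.Type == tmpfsMagic {
		fmt.Println("root file system type: tmpfs")
	} else {
		fmt.Printf("root fs magic: 0x%x\n", st.Type)
	}
}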
	I0926 17:55:31.699283    4178 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:55:31.699296    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.699424    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.699525    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.699630    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.699725    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.699863    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.700007    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.700056    4178 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:55:31.769790    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 17:55:31.769808    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.769942    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.770041    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.770127    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.770216    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.770341    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.770484    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.770496    4178 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:55:33.400017    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 17:55:33.400032    4178 machine.go:96] duration metric: took 37.232210795s to provisionDockerMachine
	I0926 17:55:33.400040    4178 start.go:293] postStartSetup for "ha-476000-m03" (driver="hyperkit")
	I0926 17:55:33.400054    4178 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:55:33.400067    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.400257    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:55:33.400271    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.400365    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.400451    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.400540    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.400615    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:33.437533    4178 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:55:33.440663    4178 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 17:55:33.440673    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 17:55:33.440763    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 17:55:33.440901    4178 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 17:55:33.440910    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 17:55:33.441066    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 17:55:33.449179    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:55:33.469328    4178 start.go:296] duration metric: took 69.278399ms for postStartSetup
	I0926 17:55:33.469350    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.469543    4178 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0926 17:55:33.469556    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.469645    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.469723    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.469812    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.469885    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:33.505216    4178 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0926 17:55:33.505294    4178 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0926 17:55:33.540120    4178 fix.go:56] duration metric: took 37.482649135s for fixHost
	I0926 17:55:33.540150    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.540287    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.540382    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.540461    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.540540    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.540677    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:33.540816    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:33.540823    4178 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:55:33.598810    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398533.714160628
	
	I0926 17:55:33.598825    4178 fix.go:216] guest clock: 1727398533.714160628
	I0926 17:55:33.598831    4178 fix.go:229] Guest: 2024-09-26 17:55:33.714160628 -0700 PDT Remote: 2024-09-26 17:55:33.540136 -0700 PDT m=+153.107512249 (delta=174.024628ms)
	I0926 17:55:33.598841    4178 fix.go:200] guest clock delta is within tolerance: 174.024628ms
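The clock check above runs `date +%s.%N` in the guest and compares it with the host clock; the 174ms delta is accepted as within tolerance. A sketch of that comparison (the guest timestamp is taken from the log; the 1-second tolerance is hypothetical, minikube applies its own threshold):

// Parse the guest's `date +%s.%N` output and measure drift from the host.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	const guestOut = "1727398533.714160628" // from the log above
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if math.Abs(delta.Seconds()) < 1.0 { // illustrative tolerance
		fmt.Printf("clock delta %v within tolerance\n", delta)
	} else {
		fmt.Printf("clock delta %v too large, would resync\n", delta)
	}
}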
	I0926 17:55:33.598846    4178 start.go:83] releasing machines lock for "ha-476000-m03", held for 37.541403544s
	I0926 17:55:33.598861    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.598984    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetIP
	I0926 17:55:33.620720    4178 out.go:177] * Found network options:
	I0926 17:55:33.640782    4178 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0926 17:55:33.662722    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	W0926 17:55:33.662755    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:55:33.662789    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.663752    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.664030    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.664220    4178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:55:33.664265    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	W0926 17:55:33.664303    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	W0926 17:55:33.664331    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:55:33.664429    4178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 17:55:33.664449    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.664488    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.664703    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.664719    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.664903    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.664932    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.665066    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:33.665091    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.665207    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	W0926 17:55:33.697895    4178 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:55:33.697966    4178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 17:55:33.748934    4178 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 17:55:33.748959    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:55:33.749065    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:55:33.765581    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 17:55:33.775502    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:55:33.785025    4178 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:55:33.785083    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:55:33.794919    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:55:33.804605    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:55:33.814324    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:55:33.824237    4178 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:55:33.832956    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:55:33.841773    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:55:33.851179    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 17:55:33.860818    4178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:55:33.869929    4178 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 17:55:33.870002    4178 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 17:55:33.880612    4178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
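The sequence above is a standard netfilter fallback: the sysctl fails with status 255 because /proc/sys/net/bridge/bridge-nf-call-iptables only exists once br_netfilter is loaded, so the module is modprobed and IPv4 forwarding is switched on. A Linux-only sketch of that recovery path:

// If the bridge sysctl is absent, load br_netfilter, then enable
// IPv4 forwarding by writing the proc file directly.
package main

import (
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		// Loading the module creates the sysctl entry.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			panic(err)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		panic(err)
	}
}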
	I0926 17:55:33.888804    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:55:33.989453    4178 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 17:55:34.008589    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:55:34.008666    4178 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:55:34.033408    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:55:34.045976    4178 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:55:34.061768    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:55:34.072236    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:55:34.082936    4178 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 17:55:34.101453    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:55:34.111855    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:55:34.126151    4178 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:55:34.129207    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:55:34.136448    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 17:55:34.149966    4178 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:55:34.247760    4178 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:55:34.364359    4178 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:55:34.364382    4178 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 17:55:34.380269    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:55:34.475811    4178 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:56:35.519197    4178 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.04314195s)
	I0926 17:56:35.519276    4178 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0926 17:56:35.552893    4178 out.go:201] 
	W0926 17:56:35.574257    4178 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 27 00:55:31 ha-476000-m03 systemd[1]: Starting Docker Application Container Engine...
	Sep 27 00:55:31 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:31.500016553Z" level=info msg="Starting up"
	Sep 27 00:55:31 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:31.500635723Z" level=info msg="containerd not running, starting managed containerd"
	Sep 27 00:55:31 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:31.501585462Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=510
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.515859502Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530811327Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530896497Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530963742Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530999016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531160593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531211393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531353040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531394128Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531431029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531461249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531611451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531854923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533401951Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533446517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533570107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533614884Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533785548Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533833312Z" level=info msg="metadata content store policy set" policy=shared
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537372044Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537425387Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537458961Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537519539Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537555242Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537622818Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537842730Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537922428Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537957588Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537987448Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538017362Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538049217Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538078685Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538107984Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538137843Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538167077Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538198997Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538230397Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538266484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538296944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538326105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538358875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538390741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538420029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538458203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538495889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538528790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538561681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538590379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538618723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538647795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538678724Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538713636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538743343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538771404Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538879453Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538923135Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538973990Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539015313Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539070453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539103724Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539133731Z" level=info msg="NRI interface is disabled by configuration."
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539314481Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539398768Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539457208Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539540620Z" level=info msg="containerd successfully booted in 0.024310s"
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.523809928Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.557923590Z" level=info msg="Loading containers: start."
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.687864975Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.754261548Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.488464069Z" level=info msg="Loading containers: done."
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495297411Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495333206Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495348892Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495450205Z" level=info msg="Daemon has completed initialization"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.514076327Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.514159018Z" level=info msg="API listen on [::]:2376"
	Sep 27 00:55:33 ha-476000-m03 systemd[1]: Started Docker Application Container Engine.
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.603579868Z" level=info msg="Processing signal 'terminated'"
	Sep 27 00:55:34 ha-476000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.604826953Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.605154827Z" level=info msg="Daemon shutdown complete"
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.605194895Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.605243671Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 27 00:55:35 ha-476000-m03 systemd[1]: docker.service: Deactivated successfully.
	Sep 27 00:55:35 ha-476000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Sep 27 00:55:35 ha-476000-m03 systemd[1]: Starting Docker Application Container Engine...
	Sep 27 00:55:35 ha-476000-m03 dockerd[1093]: time="2024-09-27T00:55:35.644572631Z" level=info msg="Starting up"
	Sep 27 00:56:35 ha-476000-m03 dockerd[1093]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 27 00:56:35 ha-476000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 27 00:56:35 ha-476000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 27 00:56:35 ha-476000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0926 17:56:35.574334    4178 out.go:270] * 
	W0926 17:56:35.575462    4178 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:56:35.658842    4178 out.go:201] 
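The m03 journal above ends with the actual failure: systemd relaunches dockerd (PID 1093) at 00:55:35, the daemon blocks on dialing /run/containerd/containerd.sock, and exactly sixty seconds later it gives up with "context deadline exceeded", so docker.service fails and the container runtime on m03 never comes back. A quick way to confirm whether containerd itself ever came up on that guest would be something like the following (a sketch run from the host, reusing the profile and node names shown in these logs; not part of the captured test output):

	out/minikube-darwin-amd64 ssh -p ha-476000 -n m03 -- sudo systemctl status containerd docker
	out/minikube-darwin-amd64 ssh -p ha-476000 -n m03 -- sudo journalctl -u containerd --no-pager -n 50

If containerd's unit never reaches "active (running)", its own journal lines, not dockerd's, identify the root cause.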
	
	
	==> Docker <==
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.206048904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.206179384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 cri-dockerd[1422]: time="2024-09-27T00:54:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ded079a0572139d8da280864d2cf23e26a7a74761427fdb6aa8247bf1b618b63/resolv.conf as [nameserver 192.169.0.1]"
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.465946902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.465995187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.466006348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.466074171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 cri-dockerd[1422]: time="2024-09-27T00:54:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef132416f65d445e2be52f1f35d402e4103f11df5abe57373ffacf06538460a2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 27 00:54:01 ha-476000 cri-dockerd[1422]: time="2024-09-27T00:54:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82fb727d3b4ab9beb6771fe42b02b13cfa819ec6e94565fc85eb5e44849131dc/resolv.conf as [nameserver 192.169.0.1]"
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.953799067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.953836835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.953845219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.953903701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.967774874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.968202742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.968237276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.968864557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:32 ha-476000 dockerd[1165]: time="2024-09-27T00:54:32.331720830Z" level=info msg="ignoring event" container=182d3576c4be84b985fdcac8be52d4bc1daa03061cca1c9af1ea1719cc87ef93 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:54:32 ha-476000 dockerd[1171]: time="2024-09-27T00:54:32.332359122Z" level=info msg="shim disconnected" id=182d3576c4be84b985fdcac8be52d4bc1daa03061cca1c9af1ea1719cc87ef93 namespace=moby
	Sep 27 00:54:32 ha-476000 dockerd[1171]: time="2024-09-27T00:54:32.332548493Z" level=warning msg="cleaning up after shim disconnected" id=182d3576c4be84b985fdcac8be52d4bc1daa03061cca1c9af1ea1719cc87ef93 namespace=moby
	Sep 27 00:54:32 ha-476000 dockerd[1171]: time="2024-09-27T00:54:32.332589783Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:54:47 ha-476000 dockerd[1171]: time="2024-09-27T00:54:47.288497270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 00:54:47 ha-476000 dockerd[1171]: time="2024-09-27T00:54:47.289077983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:54:47 ha-476000 dockerd[1171]: time="2024-09-27T00:54:47.289196082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:47 ha-476000 dockerd[1171]: time="2024-09-27T00:54:47.289608100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
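In the primary node's Docker journal above, the "ignoring event" / "shim disconnected" / "cleaning up after shim disconnected" triplet at 00:54:32 is the runtime tearing down container 182d3576c4be8…, which the container status table below lists as the Exited first attempt of storage-provisioner. Pulling that exited container's output is a one-liner (sketch, reusing the container ID from the log line above):

	out/minikube-darwin-amd64 ssh -p ha-476000 -- sudo docker logs 182d3576c4be8 2>&1 | tail -n 20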
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b05b1fc6dccd2       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       2                   82fb727d3b4ab       storage-provisioner
	182d3576c4be8       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       1                   82fb727d3b4ab       storage-provisioner
	1e068209398d4       8c811b4aec35f                                                                                         7 minutes ago       Running             busybox                   1                   ef132416f65d4       busybox-7dff88458-bvjrf
	3ab08f3aed771       60c005f310ff3                                                                                         7 minutes ago       Running             kube-proxy                1                   ded079a057213       kube-proxy-nrsx7
	13b4ae2edced3       12968670680f4                                                                                         7 minutes ago       Running             kindnet-cni               1                   aedbce80ab870       kindnet-lgj66
	bd209bf19cc97       c69fa2e9cbf5f                                                                                         7 minutes ago       Running             coredns                   1                   78def8c2a71e9       coredns-7c65d6cfc9-7jwgv
	fa6222acd1314       c69fa2e9cbf5f                                                                                         7 minutes ago       Running             coredns                   1                   c557d11d235a0       coredns-7c65d6cfc9-44l9n
	87e465b7b95f5       6bab7719df100                                                                                         7 minutes ago       Running             kube-apiserver            2                   84bf5bfc1db95       kube-apiserver-ha-476000
	01c5e9b4fab08       175ffd71cce3d                                                                                         7 minutes ago       Running             kube-controller-manager   2                   7a8e5df4a06d2       kube-controller-manager-ha-476000
	e50b7f6d45d34       38af8ddebf499                                                                                         8 minutes ago       Running             kube-vip                  0                   9ff0bf9fa82a1       kube-vip-ha-476000
	e923cc80604d7       9aa1fad941575                                                                                         8 minutes ago       Running             kube-scheduler            1                   14ddb9d9f440b       kube-scheduler-ha-476000
	89ad0e203b827       2e96e5913fc06                                                                                         8 minutes ago       Running             etcd                      1                   28300cd77661a       etcd-ha-476000
	d6683f4746762       6bab7719df100                                                                                         8 minutes ago       Exited              kube-apiserver            1                   84bf5bfc1db95       kube-apiserver-ha-476000
	06a5f950d0b27       175ffd71cce3d                                                                                         8 minutes ago       Exited              kube-controller-manager   1                   7a8e5df4a06d2       kube-controller-manager-ha-476000
	0fe8d9cd2d8d2       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   16 minutes ago      Exited              busybox                   0                   58dc7b4f775bb       busybox-7dff88458-bvjrf
	6e7030dd2319d       c69fa2e9cbf5f                                                                                         18 minutes ago      Exited              coredns                   0                   19d1dd5324d2b       coredns-7c65d6cfc9-7jwgv
	325909e950c7b       c69fa2e9cbf5f                                                                                         18 minutes ago      Exited              coredns                   0                   4de17e21e7a0f       coredns-7c65d6cfc9-44l9n
	730d4ab163e72       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              19 minutes ago      Exited              kindnet-cni               0                   30119aa4fc19b       kindnet-lgj66
	2d1ef1d1af27d       60c005f310ff3                                                                                         19 minutes ago      Exited              kube-proxy                0                   581372b45e58a       kube-proxy-nrsx7
	8b01a83a0b098       9aa1fad941575                                                                                         19 minutes ago      Exited              kube-scheduler            0                   c0232eed71fc3       kube-scheduler-ha-476000
	c08f45a78a8ec       2e96e5913fc06                                                                                         19 minutes ago      Exited              etcd                      0                   ff9ea0993276b       etcd-ha-476000
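In this table, CREATED and STATE tell the story: each control-plane component has an Exited entry aged 18-19 minutes (the generation from the original boot) and a Running replacement aged 7-8 minutes, created after the node restart; ATTEMPT is the per-pod restart counter, so kube-apiserver, for example, is on its third instance (attempt 2). The listing can be regenerated on the node itself (sketch; crictl is assumed to be present in the guest, as it is in standard minikube images):

	out/minikube-darwin-amd64 ssh -p ha-476000 -- sudo crictl ps -a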
	
	
	==> coredns [325909e950c7] <==
	[INFO] 10.244.0.4:41413 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172004s
	[INFO] 10.244.0.4:39923 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145289s
	[INFO] 10.244.0.4:55894 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153357s
	[INFO] 10.244.0.4:52696 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059737s
	[INFO] 10.244.1.2:45922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008915s
	[INFO] 10.244.1.2:44828 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111301s
	[INFO] 10.244.1.2:53232 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116513s
	[INFO] 10.244.2.2:38669 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109219s
	[INFO] 10.244.2.2:51776 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069559s
	[INFO] 10.244.2.2:34317 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136009s
	[INFO] 10.244.2.2:35638 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001211s
	[INFO] 10.244.2.2:51345 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075754s
	[INFO] 10.244.0.4:53603 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110008s
	[INFO] 10.244.0.4:48703 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116941s
	[INFO] 10.244.1.2:60563 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101753s
	[INFO] 10.244.1.2:40746 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119902s
	[INFO] 10.244.2.2:38053 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094376s
	[INFO] 10.244.2.2:51713 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069296s
	[INFO] 10.244.0.4:32805 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008605s
	[INFO] 10.244.0.4:44664 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000292333s
	[INFO] 10.244.1.2:33360 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000078243s
	[INFO] 10.244.2.2:36409 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159318s
	[INFO] 10.244.2.2:36868 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094303s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6e7030dd2319] <==
	[INFO] 10.244.0.4:56870 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000085932s
	[INFO] 10.244.0.4:42671 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000180223s
	[INFO] 10.244.1.2:48098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102353s
	[INFO] 10.244.1.2:56626 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00009538s
	[INFO] 10.244.1.2:45195 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135305s
	[INFO] 10.244.1.2:57387 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000073744s
	[INFO] 10.244.1.2:56567 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000045328s
	[INFO] 10.244.2.2:40253 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000077683s
	[INFO] 10.244.2.2:49008 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110327s
	[INFO] 10.244.2.2:54182 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000061031s
	[INFO] 10.244.0.4:53519 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087904s
	[INFO] 10.244.0.4:37380 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132535s
	[INFO] 10.244.1.2:33397 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128623s
	[INFO] 10.244.1.2:35879 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014214s
	[INFO] 10.244.2.2:39230 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133513s
	[INFO] 10.244.2.2:47654 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054424s
	[INFO] 10.244.0.4:59796 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007443s
	[INFO] 10.244.0.4:49766 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000103812s
	[INFO] 10.244.1.2:36226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102458s
	[INFO] 10.244.1.2:35698 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010282s
	[INFO] 10.244.1.2:40757 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000066548s
	[INFO] 10.244.2.2:44488 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148719s
	[INFO] 10.244.2.2:40024 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000069743s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
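Both pre-restart coredns instances ([325909e950c7] and [6e7030dd2319]) end the same way: a SIGTERM followed by the health plugin holding a 5-second lameduck window so in-flight clients can drain before the process exits. That window comes from the `lameduck 5s` stanza in the stock kubeadm Corefile; it can be read back from the live ConfigMap (sketch, assuming the kubeconfig context is named after the minikube profile):

	kubectl --context ha-476000 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'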
	
	
	==> coredns [bd209bf19cc9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:43213 - 10525 "HINFO IN 4125844120146388069.4027558012888257277. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0104908s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1432599962]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.650) (total time: 30002ms):
	Trace[1432599962]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (00:54:31.653)
	Trace[1432599962]: [30.002427557s] [30.002427557s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[417897734]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.652) (total time: 30002ms):
	Trace[417897734]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (00:54:31.654)
	Trace[417897734]: [30.002368442s] [30.002368442s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1861937109]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.653) (total time: 30001ms):
	Trace[1861937109]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (00:54:31.654)
	Trace[1861937109]: [30.001494446s] [30.001494446s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [fa6222acd131] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35702 - 33029 "HINFO IN 8241224091513256990.6666502665085127686. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009680676s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1899858293]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.665) (total time: 30001ms):
	Trace[1899858293]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (00:54:31.666)
	Trace[1899858293]: [30.001480741s] [30.001480741s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1985679635]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.669) (total time: 30000ms):
	Trace[1985679635]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (00:54:31.669)
	Trace[1985679635]: [30.000934597s] [30.000934597s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[345146888]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.669) (total time: 30003ms):
	Trace[345146888]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (00:54:31.673)
	Trace[345146888]: [30.003771613s] [30.003771613s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
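The two restarted coredns instances above show the other half of the outage: for the first thirty seconds after they come up at 00:54:01, every list call against the kubernetes Service VIP (https://10.96.0.1:443) times out, which is consistent with kube-proxy on the freshly restarted node not yet having reprogrammed the Service rules (the node events below show kube-proxy starting in the same window). Two hedged checks for that path, again assuming the context name matches the profile:

	kubectl --context ha-476000 get endpointslices -l kubernetes.io/service-name=kubernetes
	kubectl --context ha-476000 -n kube-system logs -l k8s-app=kube-proxy --tail=20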
	
	
	==> describe nodes <==
	Name:               ha-476000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-476000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-476000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_26T17_42_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:42:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-476000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 01:01:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:59:03 +0000   Fri, 27 Sep 2024 00:42:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:59:03 +0000   Fri, 27 Sep 2024 00:42:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:59:03 +0000   Fri, 27 Sep 2024 00:42:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:59:03 +0000   Fri, 27 Sep 2024 00:42:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-476000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 24c18e25f36040298bb96a7a31469c55
	  System UUID:                99cf4d4f-0000-0000-a72a-447af4e3b1db
	  Boot ID:                    8cf1f24c-8c01-4381-8f8f-6e75f77e6648
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bvjrf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-7c65d6cfc9-44l9n             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 coredns-7c65d6cfc9-7jwgv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 etcd-ha-476000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-lgj66                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-476000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-476000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-nrsx7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-476000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-476000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m44s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m41s                  kube-proxy       
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 19m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)      kubelet          Node ha-476000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)      kubelet          Node ha-476000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)      kubelet          Node ha-476000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     19m                    kubelet          Node ha-476000 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m                    kubelet          Node ha-476000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                    kubelet          Node ha-476000 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           19m                    node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  NodeReady                18m                    kubelet          Node ha-476000 status is now: NodeReady
	  Normal  RegisteredNode           18m                    node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  RegisteredNode           16m                    node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  Starting                 8m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m25s (x8 over 8m25s)  kubelet          Node ha-476000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m25s (x8 over 8m25s)  kubelet          Node ha-476000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m25s (x7 over 8m25s)  kubelet          Node ha-476000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m52s                  node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  RegisteredNode           7m38s                  node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
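The event history for ha-476000 is internally consistent with the container table earlier: two kubelet generations ("Starting kubelet." at 19m and again at 8m25s), a RegisteredNode entry from each controller-manager election, and a Ready condition that has not flapped since 00:42:57. Cross-checking pod-level restarts against these node events is straightforward (sketch):

	kubectl --context ha-476000 -n kube-system get pods -o wide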
	
	
	Name:               ha-476000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-476000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-476000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_26T17_43_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:43:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-476000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 01:01:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:59:06 +0000   Fri, 27 Sep 2024 00:43:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:59:06 +0000   Fri, 27 Sep 2024 00:43:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:59:06 +0000   Fri, 27 Sep 2024 00:43:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:59:06 +0000   Fri, 27 Sep 2024 00:54:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-476000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 35bc971223ac4e939cad535ac89bc725
	  System UUID:                58f4445b-0000-0000-bae0-ab27a7b8106e
	  Boot ID:                    7dcb1bbe-ca7a-45f1-9dd9-dc673285b7e4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gvp8q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-ha-476000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-hhrtc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-476000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-476000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-ctdh4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-476000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-476000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 18m                  kube-proxy       
	  Normal   Starting                 7m24s                kube-proxy       
	  Normal   Starting                 14m                  kube-proxy       
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)    kubelet          Node ha-476000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)    kubelet          Node ha-476000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)    kubelet          Node ha-476000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           18m                  node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   RegisteredNode           18m                  node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   RegisteredNode           16m                  node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   NodeAllocatableEnforced  14m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 14m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m                  kubelet          Node ha-476000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                  kubelet          Node ha-476000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                  kubelet          Node ha-476000-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 14m                  kubelet          Node ha-476000-m02 has been rebooted, boot id: 993826c6-3fde-4076-a7cb-33cc19f1b1bc
	  Normal   RegisteredNode           14m                  node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   NodeHasNoDiskPressure    8m4s (x8 over 8m4s)  kubelet          Node ha-476000-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 8m4s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  8m4s (x8 over 8m4s)  kubelet          Node ha-476000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     8m4s (x7 over 8m4s)  kubelet          Node ha-476000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  8m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           7m52s                node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   RegisteredNode           7m38s                node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
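ha-476000-m02 tells a similar recovery story: a Rebooted warning 14 minutes ago (with a new boot id), another kubelet restart at 8m4s, and a Ready condition whose last transition (00:54:04) lands right as the cluster came back. Filtering the raw events down to this node is one way to line the timestamps up (sketch):

	kubectl --context ha-476000 get events --field-selector involvedObject.kind=Node,involvedObject.name=ha-476000-m02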
	
	
	Name:               ha-476000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-476000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-476000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_26T17_44_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:44:51 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-476000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:47:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 27 Sep 2024 00:45:53 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 27 Sep 2024 00:45:53 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 27 Sep 2024 00:45:53 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 27 Sep 2024 00:45:53 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-476000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 365f6a31a3d140dba5c1be3b08da7ad2
	  System UUID:                91a54c64-0000-0000-acd8-a07fa14dbb0d
	  Boot ID:                    4ca02f6d-4375-4909-8877-3e005809b499
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jgndj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-ha-476000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-4pnxr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-476000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-476000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-bpsqv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-476000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-476000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node ha-476000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node ha-476000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node ha-476000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           16m                node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           16m                node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           7m52s              node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           7m38s              node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  NodeNotReady             7m12s              node-controller  Node ha-476000-m03 status is now: NodeNotReady
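Unlike the first two control-plane nodes, ha-476000-m03 never recovered: its lease RenewTime is stuck at 00:47:14, all four conditions flipped to Unknown at 00:54:32 ("Kubelet stopped posting node status"), and the node controller applied the unreachable NoExecute/NoSchedule taints plus a NodeNotReady event. That matches the m03 dockerd failure in the journal excerpt near the top of this section: with docker.service down, the kubelet cannot run pods or report status. Watching for the node to come back (sketch):

	kubectl --context ha-476000 get node ha-476000-m03 -w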
	
	
	Name:               ha-476000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-476000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-476000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_26T17_45_52_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:45:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-476000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:47:13 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 27 Sep 2024 00:46:22 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 27 Sep 2024 00:46:22 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 27 Sep 2024 00:46:22 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 27 Sep 2024 00:46:22 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-476000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 7bdc03e4e33a47a0a7d85ecb664669d4
	  System UUID:                dcce4501-0000-0000-a378-25a085ede049
	  Boot ID:                    b0d71ae5-8550-430a-94b7-e146e65fc279
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-44vxl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-proxy-5d8nb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientPID     15m (x2 over 15m)  kubelet          Node ha-476000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x2 over 15m)  kubelet          Node ha-476000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x2 over 15m)  kubelet          Node ha-476000-m04 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           15m                node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  NodeReady                15m                kubelet          Node ha-476000-m04 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  RegisteredNode           7m52s              node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  RegisteredNode           7m38s              node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  NodeNotReady             7m12s              node-controller  Node ha-476000-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.036532] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.006931] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.697129] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006778] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.775372] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.244387] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.695216] systemd-fstab-generator[461]: Ignoring "noauto" option for root device
	[  +0.101404] systemd-fstab-generator[473]: Ignoring "noauto" option for root device
	[  +1.958371] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	[  +0.251045] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +0.050021] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.047173] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +0.112931] systemd-fstab-generator[1157]: Ignoring "noauto" option for root device
	[  +2.468376] systemd-fstab-generator[1375]: Ignoring "noauto" option for root device
	[  +0.117710] systemd-fstab-generator[1387]: Ignoring "noauto" option for root device
	[  +0.113441] systemd-fstab-generator[1399]: Ignoring "noauto" option for root device
	[  +0.129593] systemd-fstab-generator[1414]: Ignoring "noauto" option for root device
	[  +0.427728] systemd-fstab-generator[1574]: Ignoring "noauto" option for root device
	[  +6.920294] kauditd_printk_skb: 212 callbacks suppressed
	[ +21.597968] kauditd_printk_skb: 40 callbacks suppressed
	[Sep27 00:54] kauditd_printk_skb: 94 callbacks suppressed
	
	
	==> etcd [89ad0e203b82] <==
	{"level":"warn","ts":"2024-09-27T01:00:41.604432Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:00:46.604611Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:00:46.604787Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:00:51.605567Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:00:51.605673Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:00:56.606564Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:00:56.606617Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:01.606696Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:01.606844Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:06.607662Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:06.607748Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:11.608840Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:11.608806Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:16.609867Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:16.610015Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:21.611095Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:21.611167Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:26.611861Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:26.611915Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:31.613088Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:31.613058Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:36.613353Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:36.613417Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:41.613941Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:41.613954Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	
	
	==> etcd [c08f45a78a8e] <==
	{"level":"warn","ts":"2024-09-27T00:47:41.542035Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T00:47:33.744957Z","time spent":"7.797074842s","remote":"127.0.0.1:40790","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":0,"response size":0,"request content":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true "}
	2024/09/27 00:47:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-27T00:47:41.542079Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.225057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-27T00:47:41.542107Z","caller":"traceutil/trace.go:171","msg":"trace[2123825160] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"299.252922ms","start":"2024-09-27T00:47:41.242851Z","end":"2024-09-27T00:47:41.542104Z","steps":["trace[2123825160] 'agreement among raft nodes before linearized reading'  (duration: 299.224906ms)"],"step_count":1}
	2024/09/27 00:47:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-27T00:47:41.593990Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T00:47:41.594018Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-27T00:47:41.602616Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-27T00:47:41.604582Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604604Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604619Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604716Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604762Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604790Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604798Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604802Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.604809Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.604819Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.605484Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.605507Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.605546Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.605556Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.607550Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-27T00:47:41.607595Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-27T00:47:41.607615Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-476000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 01:01:44 up 8 min,  0 users,  load average: 0.31, 0.33, 0.20
	Linux ha-476000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [13b4ae2edced] <==
	I0927 01:01:12.491073       1 main.go:299] handling current node
	I0927 01:01:22.489963       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 01:01:22.490013       1 main.go:299] handling current node
	I0927 01:01:22.490027       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 01:01:22.490033       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 01:01:22.490309       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 01:01:22.490350       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 01:01:22.490406       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 01:01:22.490538       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 01:01:32.489725       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 01:01:32.489842       1 main.go:299] handling current node
	I0927 01:01:32.490043       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 01:01:32.490178       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 01:01:32.490485       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 01:01:32.490613       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 01:01:32.490780       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 01:01:32.490865       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 01:01:42.491359       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 01:01:42.491399       1 main.go:299] handling current node
	I0927 01:01:42.491410       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 01:01:42.491415       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 01:01:42.491596       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 01:01:42.491623       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 01:01:42.491779       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 01:01:42.491805       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [730d4ab163e7] <==
	I0927 00:47:03.705461       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:47:13.713791       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 00:47:13.713985       1 main.go:299] handling current node
	I0927 00:47:13.714102       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 00:47:13.714214       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 00:47:13.714414       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 00:47:13.714545       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 00:47:13.714946       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 00:47:13.715065       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:47:23.710748       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 00:47:23.710778       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 00:47:23.710966       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 00:47:23.711202       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 00:47:23.711295       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 00:47:23.711303       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:47:23.711508       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 00:47:23.711595       1 main.go:299] handling current node
	I0927 00:47:33.704824       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 00:47:33.704897       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 00:47:33.705242       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 00:47:33.705307       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 00:47:33.705486       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 00:47:33.705818       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:47:33.705995       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 00:47:33.706008       1 main.go:299] handling current node
	
	
	==> kube-apiserver [87e465b7b95f] <==
	I0927 00:54:02.884947       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0927 00:54:02.884955       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0927 00:54:02.943365       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 00:54:02.943570       1 policy_source.go:224] refreshing policies
	I0927 00:54:02.949648       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 00:54:02.975777       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0927 00:54:02.975897       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0927 00:54:02.975835       1 shared_informer.go:320] Caches are synced for configmaps
	I0927 00:54:02.976591       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0927 00:54:02.977323       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0927 00:54:02.977419       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0927 00:54:02.977565       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0927 00:54:02.982008       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0927 00:54:02.982182       1 shared_informer.go:320] Caches are synced for node_authorizer
	W0927 00:54:02.987432       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	I0927 00:54:02.987619       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0927 00:54:02.987707       1 aggregator.go:171] initial CRD sync complete...
	I0927 00:54:02.987750       1 autoregister_controller.go:144] Starting autoregister controller
	I0927 00:54:02.987857       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0927 00:54:02.987898       1 cache.go:39] Caches are synced for autoregister controller
	I0927 00:54:02.988709       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 00:54:02.993982       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0927 00:54:02.997126       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0927 00:54:03.884450       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0927 00:54:04.211694       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.6]
	
	
	==> kube-apiserver [d6683f474676] <==
	I0927 00:53:26.693239       1 options.go:228] external host was not specified, using 192.169.0.5
	I0927 00:53:26.695952       1 server.go:142] Version: v1.31.1
	I0927 00:53:26.696173       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:53:27.299904       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0927 00:53:27.320033       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 00:53:27.330041       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0927 00:53:27.330098       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0927 00:53:27.332141       1 instance.go:232] Using reconciler: lease
	W0927 00:53:47.293920       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0927 00:53:47.294149       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0927 00:53:47.333433       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [01c5e9b4fab0] <==
	I0927 00:54:07.185942       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.202µs"
	I0927 00:54:09.276645       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.828631ms"
	I0927 00:54:09.276726       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.067µs"
	I0927 00:54:32.998333       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m04"
	I0927 00:54:32.998470       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m03"
	I0927 00:54:33.020582       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m03"
	I0927 00:54:33.020882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m04"
	I0927 00:54:33.070337       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.029804ms"
	I0927 00:54:33.070565       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="96.493µs"
	I0927 00:54:36.474604       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m03"
	I0927 00:54:38.190557       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m03"
	I0927 00:54:40.584626       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-h7qwt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-h7qwt\": the object has been modified; please apply your changes to the latest version and try again"
	I0927 00:54:40.585022       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"3537638a-d8ae-4b35-b930-21aeb412efa9", APIVersion:"v1", ResourceVersion:"270", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-h7qwt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-h7qwt": the object has been modified; please apply your changes to the latest version and try again
	I0927 00:54:40.589666       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="48.410037ms"
	I0927 00:54:40.614904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="25.040724ms"
	I0927 00:54:40.615187       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="45.324µs"
	I0927 00:54:46.573579       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m04"
	I0927 00:54:48.277366       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m04"
	I0927 00:59:03.699041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000"
	I0927 00:59:06.173964       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m02"
	I0927 00:59:36.474889       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7dff88458-jgndj"
	I0927 00:59:36.494985       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="35.479µs"
	I0927 00:59:36.562600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.647863ms"
	I0927 00:59:36.603961       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.297548ms"
	I0927 00:59:36.604297       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="195.359µs"
	
	
	==> kube-controller-manager [06a5f950d0b2] <==
	I0927 00:53:27.325939       1 serving.go:386] Generated self-signed cert in-memory
	I0927 00:53:28.243164       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0927 00:53:28.243279       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:53:28.245422       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0927 00:53:28.245777       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0927 00:53:28.245999       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0927 00:53:28.246030       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0927 00:53:48.339070       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-proxy [2d1ef1d1af27] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:42:39.294950       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 00:42:39.305827       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0927 00:42:39.314387       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:42:39.360026       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:42:39.360068       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:42:39.360085       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:42:39.362140       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:42:39.362382       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:42:39.362411       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:42:39.365397       1 config.go:199] "Starting service config controller"
	I0927 00:42:39.365470       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:42:39.365636       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:42:39.365692       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:42:39.366725       1 config.go:328] "Starting node config controller"
	I0927 00:42:39.366799       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:42:39.466084       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:42:39.466107       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:42:39.468057       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [3ab08f3aed77] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:54:02.572463       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 00:54:02.595215       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0927 00:54:02.595477       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:54:02.710300       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:54:02.710322       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:54:02.710339       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:54:02.714167       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:54:02.715628       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:54:02.715707       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:54:02.718471       1 config.go:199] "Starting service config controller"
	I0927 00:54:02.719333       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:54:02.719741       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:54:02.719810       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:54:02.721272       1 config.go:328] "Starting node config controller"
	I0927 00:54:02.721390       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:54:02.820358       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:54:02.820547       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:54:02.824323       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8b01a83a0b09] <==
	E0927 00:45:52.380874       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mm66p\": pod kube-proxy-mm66p is already assigned to node \"ha-476000-m04\"" pod="kube-system/kube-proxy-mm66p"
	E0927 00:45:52.381463       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-44vxl\": pod kindnet-44vxl is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-44vxl" node="ha-476000-m04"
	E0927 00:45:52.381533       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 488a3806-d7c1-4397-bff8-00d9ea3cb984(kube-system/kindnet-44vxl) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-44vxl"
	E0927 00:45:52.381617       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-44vxl\": pod kindnet-44vxl is already assigned to node \"ha-476000-m04\"" pod="kube-system/kindnet-44vxl"
	I0927 00:45:52.381654       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-44vxl" node="ha-476000-m04"
	E0927 00:45:52.382881       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gtnxm\": pod kindnet-gtnxm is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-gtnxm" node="ha-476000-m04"
	E0927 00:45:52.383371       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c96b1801-d5cd-47bc-8555-43224fd5668c(kube-system/kindnet-gtnxm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-gtnxm"
	E0927 00:45:52.383419       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gtnxm\": pod kindnet-gtnxm is already assigned to node \"ha-476000-m04\"" pod="kube-system/kindnet-gtnxm"
	I0927 00:45:52.383438       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gtnxm" node="ha-476000-m04"
	E0927 00:45:52.385915       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5d8nb\": pod kube-proxy-5d8nb is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5d8nb" node="ha-476000-m04"
	E0927 00:45:52.386403       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1e9c0178-2d58-4ca1-9fbe-c8e54d91bf1a(kube-system/kube-proxy-5d8nb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5d8nb"
	E0927 00:45:52.388489       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5d8nb\": pod kube-proxy-5d8nb is already assigned to node \"ha-476000-m04\"" pod="kube-system/kube-proxy-5d8nb"
	I0927 00:45:52.388818       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5d8nb" node="ha-476000-m04"
	E0927 00:45:52.414440       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-p2r4t\": pod kindnet-p2r4t is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-p2r4t" node="ha-476000-m04"
	E0927 00:45:52.414491       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e7daae81-cf6d-498e-9458-8613a0c1f174(kube-system/kindnet-p2r4t) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-p2r4t"
	E0927 00:45:52.414504       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-p2r4t\": pod kindnet-p2r4t is already assigned to node \"ha-476000-m04\"" pod="kube-system/kindnet-p2r4t"
	I0927 00:45:52.414830       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-p2r4t" node="ha-476000-m04"
	E0927 00:45:52.434469       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-f2tbl\": pod kube-proxy-f2tbl is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-f2tbl" node="ha-476000-m04"
	E0927 00:45:52.434547       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ce1fa3d7-adbb-4d4d-bd23-a1e60ee54d5b(kube-system/kube-proxy-f2tbl) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-f2tbl"
	E0927 00:45:52.434998       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-f2tbl\": pod kube-proxy-f2tbl is already assigned to node \"ha-476000-m04\"" pod="kube-system/kube-proxy-f2tbl"
	I0927 00:45:52.435043       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-f2tbl" node="ha-476000-m04"
	I0927 00:47:41.631073       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0927 00:47:41.633242       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0927 00:47:41.634639       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0927 00:47:41.635978       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e923cc80604d] <==
	W0927 00:53:55.890712       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:55.890825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:55.916618       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:55.916669       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:56.112443       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:56.112541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:56.325586       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:56.325680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:56.333523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:56.333592       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:57.242866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:57.243040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:57.398430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:57.398522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:57.562966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:57.563196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:58.300576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:58.300855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:58.356734       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:58.356802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:54:02.892809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 00:54:02.892856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:54:02.893077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:54:02.893208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0927 00:54:02.956308       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 00:57:19 ha-476000 kubelet[1581]: E0927 00:57:19.247466    1581 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 00:57:19 ha-476000 kubelet[1581]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:57:19 ha-476000 kubelet[1581]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:57:19 ha-476000 kubelet[1581]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:57:19 ha-476000 kubelet[1581]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 00:58:19 ha-476000 kubelet[1581]: E0927 00:58:19.248304    1581 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 00:58:19 ha-476000 kubelet[1581]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:58:19 ha-476000 kubelet[1581]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:58:19 ha-476000 kubelet[1581]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:58:19 ha-476000 kubelet[1581]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 00:59:19 ha-476000 kubelet[1581]: E0927 00:59:19.247941    1581 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 00:59:19 ha-476000 kubelet[1581]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:59:19 ha-476000 kubelet[1581]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:59:19 ha-476000 kubelet[1581]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:59:19 ha-476000 kubelet[1581]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 01:00:19 ha-476000 kubelet[1581]: E0927 01:00:19.248217    1581 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 01:00:19 ha-476000 kubelet[1581]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 01:00:19 ha-476000 kubelet[1581]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 01:00:19 ha-476000 kubelet[1581]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 01:00:19 ha-476000 kubelet[1581]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 01:01:19 ha-476000 kubelet[1581]: E0927 01:01:19.248364    1581 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 01:01:19 ha-476000 kubelet[1581]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 01:01:19 ha-476000 kubelet[1581]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 01:01:19 ha-476000 kubelet[1581]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 01:01:19 ha-476000 kubelet[1581]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-476000 -n ha-476000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-476000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-qwrlx
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-476000 describe pod busybox-7dff88458-qwrlx
helpers_test.go:282: (dbg) kubectl --context ha-476000 describe pod busybox-7dff88458-qwrlx:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-qwrlx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lg2sq (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-lg2sq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age    From               Message
	  ----     ------            ----   ----               -------
	  Warning  FailedScheduling  2m10s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  2m10s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
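The describe output above explains why busybox-7dff88458-qwrlx stays Pending: the scheduler rejected all four nodes, two on the deployment's pod anti-affinity rule and two on the node.kubernetes.io/unreachable taint. The harness finds such pods with a server-side field selector (helpers_test.go:261). A minimal client-go sketch of the same query, assuming a kubeconfig at the default ~/.kube/config path; it mirrors the kubectl invocation, not the harness code itself.

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the kubeconfig the way kubectl would (default ~/.kube/config).
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Same server-side filter as
    	// `kubectl get po --field-selector=status.phase!=Running -A`.
    	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
    		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
    	}
    }

Because the filter runs server-side, this returns exactly the set the post-mortem reported, here the single pod busybox-7dff88458-qwrlx.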
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (302.43s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:304: expected profile "ha-476000" in json of 'profile list' to include 4 nodes but have 5 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-476000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-476000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServ
erPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-476000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion
\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.169.0.9\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,
\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Moun
t\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
ha_test.go:307: expected profile "ha-476000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-476000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-476000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-476000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"Kub
ernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.169.0.9\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingres
s-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":9460800000
0000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-476000 -n ha-476000
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-476000 logs -n 25: (3.605783406s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n ha-476000-m04 sudo cat                                                                                      | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /home/docker/cp-test_ha-476000-m03_ha-476000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-476000 cp testdata/cp-test.txt                                                                                            | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3898402723/001/cp-test_ha-476000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000:/home/docker/cp-test_ha-476000-m04_ha-476000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n ha-476000 sudo cat                                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /home/docker/cp-test_ha-476000-m04_ha-476000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m02:/home/docker/cp-test_ha-476000-m04_ha-476000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n ha-476000-m02 sudo cat                                                                                      | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /home/docker/cp-test_ha-476000-m04_ha-476000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m03:/home/docker/cp-test_ha-476000-m04_ha-476000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | ha-476000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-476000 ssh -n ha-476000-m03 sudo cat                                                                                      | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | /home/docker/cp-test_ha-476000-m04_ha-476000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-476000 node stop m02 -v=7                                                                                                 | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:46 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-476000 node start m02 -v=7                                                                                                | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:46 PDT | 26 Sep 24 17:47 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-476000 -v=7                                                                                                       | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:47 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-476000 -v=7                                                                                                            | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:47 PDT | 26 Sep 24 17:47 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-476000 --wait=true -v=7                                                                                                | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:47 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-476000                                                                                                            | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:49 PDT |                     |
	| node    | ha-476000 node delete m03 -v=7                                                                                               | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:49 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-476000 stop -v=7                                                                                                          | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:49 PDT | 26 Sep 24 17:53 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-476000 --wait=true                                                                                                     | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:53 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	| node    | add -p ha-476000                                                                                                             | ha-476000 | jenkins | v1.34.0 | 26 Sep 24 17:56 PDT |                     |
	|         | --control-plane -v=7                                                                                                         |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/26 17:53:00
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 17:53:00.467998    4178 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:53:00.468247    4178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:53:00.468252    4178 out.go:358] Setting ErrFile to fd 2...
	I0926 17:53:00.468256    4178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:53:00.468436    4178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 17:53:00.469901    4178 out.go:352] Setting JSON to false
	I0926 17:53:00.492370    4178 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3150,"bootTime":1727395230,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0926 17:53:00.492530    4178 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:53:00.514400    4178 out.go:177] * [ha-476000] minikube v1.34.0 on Darwin 14.6.1
	I0926 17:53:00.557228    4178 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:53:00.557300    4178 notify.go:220] Checking for updates...
	I0926 17:53:00.599719    4178 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:00.621009    4178 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0926 17:53:00.642091    4178 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:53:00.662936    4178 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 17:53:00.684204    4178 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:53:00.705550    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:00.706120    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.706169    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:00.715431    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52037
	I0926 17:53:00.715807    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:00.716207    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:00.716243    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:00.716493    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:00.716626    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:00.716833    4178 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:53:00.717101    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.717132    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:00.725380    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52039
	I0926 17:53:00.725706    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:00.726059    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:00.726076    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:00.726325    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:00.726449    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:00.754773    4178 out.go:177] * Using the hyperkit driver based on existing profile
	I0926 17:53:00.797071    4178 start.go:297] selected driver: hyperkit
	I0926 17:53:00.797101    4178 start.go:901] validating driver "hyperkit" against &{Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclas
s:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:53:00.797347    4178 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:53:00.797543    4178 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:53:00.797758    4178 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19711-1128/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0926 17:53:00.807380    4178 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0926 17:53:00.811121    4178 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.811145    4178 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0926 17:53:00.813743    4178 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:53:00.813780    4178 cni.go:84] Creating CNI manager for ""
	I0926 17:53:00.813817    4178 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0926 17:53:00.813892    4178 start.go:340] cluster config:
	{Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacce
l:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:53:00.814010    4178 iso.go:125] acquiring lock: {Name:mka8a9c5a237c1e4ae233281d2ff7965d13a843d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:53:00.856015    4178 out.go:177] * Starting "ha-476000" primary control-plane node in "ha-476000" cluster
	I0926 17:53:00.877127    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:53:00.877240    4178 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0926 17:53:00.877263    4178 cache.go:56] Caching tarball of preloaded images
	I0926 17:53:00.877457    4178 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 17:53:00.877476    4178 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:53:00.877658    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:00.878610    4178 start.go:360] acquireMachinesLock for ha-476000: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:53:00.878759    4178 start.go:364] duration metric: took 97.008µs to acquireMachinesLock for "ha-476000"
	I0926 17:53:00.878828    4178 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:53:00.878843    4178 fix.go:54] fixHost starting: 
	I0926 17:53:00.879324    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:00.879362    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:00.888435    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52041
	I0926 17:53:00.888799    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:00.889164    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:00.889177    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:00.889396    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:00.889518    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:00.889616    4178 main.go:141] libmachine: (ha-476000) Calling .GetState
	I0926 17:53:00.889695    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:00.889775    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4068
	I0926 17:53:00.890689    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid 4068 missing from process table
	I0926 17:53:00.890720    4178 fix.go:112] recreateIfNeeded on ha-476000: state=Stopped err=<nil>
	I0926 17:53:00.890735    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	W0926 17:53:00.890819    4178 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:53:00.933253    4178 out.go:177] * Restarting existing hyperkit VM for "ha-476000" ...
	I0926 17:53:00.956221    4178 main.go:141] libmachine: (ha-476000) Calling .Start
	I0926 17:53:00.956482    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:00.956522    4178 main.go:141] libmachine: (ha-476000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid
	I0926 17:53:00.958313    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid 4068 missing from process table
	I0926 17:53:00.958323    4178 main.go:141] libmachine: (ha-476000) DBG | pid 4068 is in state "Stopped"
	I0926 17:53:00.958337    4178 main.go:141] libmachine: (ha-476000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid...
	I0926 17:53:00.958705    4178 main.go:141] libmachine: (ha-476000) DBG | Using UUID 99cfbb80-3e9d-4d4f-a72a-447af4e3b1db
	I0926 17:53:01.067490    4178 main.go:141] libmachine: (ha-476000) DBG | Generated MAC 96:a2:4a:f3:be:4a
	I0926 17:53:01.067521    4178 main.go:141] libmachine: (ha-476000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000
	I0926 17:53:01.067590    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"99cfbb80-3e9d-4d4f-a72a-447af4e3b1db", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:01.067614    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"99cfbb80-3e9d-4d4f-a72a-447af4e3b1db", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:01.067680    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "99cfbb80-3e9d-4d4f-a72a-447af4e3b1db", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/ha-476000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"}
	I0926 17:53:01.067717    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 99cfbb80-3e9d-4d4f-a72a-447af4e3b1db -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/ha-476000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"
	I0926 17:53:01.067731    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 17:53:01.069340    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 DEBUG: hyperkit: Pid is 4191
	I0926 17:53:01.069679    4178 main.go:141] libmachine: (ha-476000) DBG | Attempt 0
	I0926 17:53:01.069693    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:01.069753    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4191
	I0926 17:53:01.071639    4178 main.go:141] libmachine: (ha-476000) DBG | Searching for 96:a2:4a:f3:be:4a in /var/db/dhcpd_leases ...
	I0926 17:53:01.071694    4178 main.go:141] libmachine: (ha-476000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:53:01.071711    4178 main.go:141] libmachine: (ha-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f7523f}
	I0926 17:53:01.071719    4178 main.go:141] libmachine: (ha-476000) DBG | Found match: 96:a2:4a:f3:be:4a
	I0926 17:53:01.071724    4178 main.go:141] libmachine: (ha-476000) DBG | IP: 192.169.0.5
	I0926 17:53:01.071801    4178 main.go:141] libmachine: (ha-476000) Calling .GetConfigRaw
	I0926 17:53:01.072466    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:01.072682    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:01.073265    4178 machine.go:93] provisionDockerMachine start ...
	I0926 17:53:01.073276    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:01.073432    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:01.073553    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:01.073654    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:01.073744    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:01.073824    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:01.073962    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:01.074151    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:01.074160    4178 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 17:53:01.077803    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 17:53:01.131821    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 17:53:01.132498    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:01.132519    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:01.132527    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:01.132535    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:01.515934    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 17:53:01.515948    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 17:53:01.630853    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:01.630870    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:01.630880    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:01.630889    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:01.631762    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 17:53:01.631773    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 17:53:07.224844    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0926 17:53:07.224979    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0926 17:53:07.224989    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0926 17:53:07.249067    4178 main.go:141] libmachine: (ha-476000) DBG | 2024/09/26 17:53:07 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0926 17:53:12.148094    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 17:53:12.148109    4178 main.go:141] libmachine: (ha-476000) Calling .GetMachineName
	I0926 17:53:12.148318    4178 buildroot.go:166] provisioning hostname "ha-476000"
	I0926 17:53:12.148328    4178 main.go:141] libmachine: (ha-476000) Calling .GetMachineName
	I0926 17:53:12.148430    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.148546    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.148649    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.148741    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.148844    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.148986    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.149192    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.149200    4178 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-476000 && echo "ha-476000" | sudo tee /etc/hostname
	I0926 17:53:12.225889    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-476000
	
	I0926 17:53:12.225907    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.226039    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.226125    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.226235    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.226313    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.226463    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.226601    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.226612    4178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-476000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-476000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-476000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:53:12.298491    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 17:53:12.298512    4178 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 17:53:12.298531    4178 buildroot.go:174] setting up certificates
	I0926 17:53:12.298537    4178 provision.go:84] configureAuth start
	I0926 17:53:12.298544    4178 main.go:141] libmachine: (ha-476000) Calling .GetMachineName
	I0926 17:53:12.298672    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:12.298777    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.298858    4178 provision.go:143] copyHostCerts
	I0926 17:53:12.298890    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:12.298959    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 17:53:12.298968    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:12.299110    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 17:53:12.299320    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:12.299359    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 17:53:12.299364    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:12.299452    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 17:53:12.299596    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:12.299633    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 17:53:12.299638    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:12.299717    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 17:53:12.299883    4178 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.ha-476000 san=[127.0.0.1 192.169.0.5 ha-476000 localhost minikube]
	I0926 17:53:12.619231    4178 provision.go:177] copyRemoteCerts
	I0926 17:53:12.619306    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 17:53:12.619328    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.619499    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.619617    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.619721    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.619805    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:12.659598    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 17:53:12.659672    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 17:53:12.679552    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 17:53:12.679620    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0926 17:53:12.699069    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 17:53:12.699141    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 17:53:12.718755    4178 provision.go:87] duration metric: took 420.20261ms to configureAuth
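configureAuth pushes three files into /etc/docker on the guest (ca.pem, server.pem, server-key.pem), which are exactly the paths dockerd's --tlsverify flags reference later in this log. A quick way to exercise that endpoint from the host, assuming the client certs under the logged .minikube directory (192.169.0.5 is this run's VM address):

    MK=/Users/jenkins/minikube-integration/19711-1128/.minikube
    docker --tlsverify \
      --tlscacert "$MK/certs/ca.pem" \
      --tlscert   "$MK/certs/cert.pem" \
      --tlskey    "$MK/certs/key.pem" \
      -H tcp://192.169.0.5:2376 version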
	I0926 17:53:12.718767    4178 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:53:12.718921    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:12.718934    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:12.719072    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.719167    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.719255    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.719341    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.719422    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.719544    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.719669    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.719676    4178 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:53:12.785771    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:53:12.785788    4178 buildroot.go:70] root file system type: tmpfs
	I0926 17:53:12.785872    4178 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:53:12.785886    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.786022    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.786110    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.786193    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.786273    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.786415    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.786558    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.786601    4178 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:53:12.862455    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 17:53:12.862477    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:12.862607    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:12.862705    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.862800    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:12.862882    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:12.863016    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:12.863156    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:12.863169    4178 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:53:14.510518    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 17:53:14.510534    4178 machine.go:96] duration metric: took 13.437211612s to provisionDockerMachine
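Note the update pattern used for the unit file: the rendered service is written to docker.service.new, diffed against the live copy, and only swapped in (followed by daemon-reload / enable / restart) when the two differ. Here the diff fails because no unit existed yet, so the new file is installed unconditionally. The same idempotent shape written out plainly (paths and flags as in the logged command):

    UNIT=/lib/systemd/system/docker.service
    if ! sudo diff -u "$UNIT" "$UNIT.new"; then
      # content changed (or unit absent): install and (re)start
      sudo mv "$UNIT.new" "$UNIT"
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi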
	I0926 17:53:14.510545    4178 start.go:293] postStartSetup for "ha-476000" (driver="hyperkit")
	I0926 17:53:14.510553    4178 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:53:14.510563    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.510765    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:53:14.510780    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.510875    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.510981    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.511085    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.511186    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.553095    4178 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:53:14.556852    4178 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 17:53:14.556867    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 17:53:14.556973    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 17:53:14.557159    4178 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 17:53:14.557167    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 17:53:14.557383    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 17:53:14.567060    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:53:14.600616    4178 start.go:296] duration metric: took 90.060103ms for postStartSetup
	I0926 17:53:14.600637    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.600819    4178 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0926 17:53:14.600832    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.600912    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.600992    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.601061    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.601150    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.640650    4178 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0926 17:53:14.640716    4178 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0926 17:53:14.694957    4178 fix.go:56] duration metric: took 13.816065248s for fixHost
	I0926 17:53:14.694980    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.695115    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.695206    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.695301    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.695399    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.695527    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:14.695674    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0926 17:53:14.695682    4178 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:53:14.760098    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398394.872717718
	
	I0926 17:53:14.760109    4178 fix.go:216] guest clock: 1727398394.872717718
	I0926 17:53:14.760115    4178 fix.go:229] Guest: 2024-09-26 17:53:14.872717718 -0700 PDT Remote: 2024-09-26 17:53:14.69497 -0700 PDT m=+14.262859348 (delta=177.747718ms)
	I0926 17:53:14.760134    4178 fix.go:200] guest clock delta is within tolerance: 177.747718ms
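The fix step compares guest and host wall clocks by running date +%s.%N over SSH; the ~178ms delta measured here is well inside tolerance, so no clock adjustment is needed. A rough equivalent from a macOS host, assuming GNU date (with %N) on the guest and using python3 on the host where BSD date lacks %N:

    guest=$(ssh docker@192.169.0.5 'date +%s.%N')
    host=$(python3 -c 'import time; print(time.time())')
    awk -v g="$guest" -v h="$host" 'BEGIN { printf "clock delta: %+.3fs\n", h - g }'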
	I0926 17:53:14.760137    4178 start.go:83] releasing machines lock for "ha-476000", held for 13.881299475s
	I0926 17:53:14.760155    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760297    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:14.760395    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760729    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760850    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:14.760950    4178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:53:14.760987    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.761013    4178 ssh_runner.go:195] Run: cat /version.json
	I0926 17:53:14.761025    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:14.761099    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.761116    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:14.761194    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.761205    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:14.761304    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.761313    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:14.761398    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.761432    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:14.795855    4178 ssh_runner.go:195] Run: systemctl --version
	I0926 17:53:14.843523    4178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 17:53:14.848548    4178 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:53:14.848602    4178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 17:53:14.862277    4178 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
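Conflicting bridge/podman CNI configs are not deleted, only parked by appending .mk_disabled, here /etc/cni/net.d/87-podman-bridge.conflist. Undoing that is just stripping the suffix again; a hypothetical cleanup reversing the rename:

    for f in /etc/cni/net.d/*.mk_disabled; do
      [ -e "$f" ] || continue          # skip when nothing was disabled
      sudo mv "$f" "${f%.mk_disabled}"
    done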
	I0926 17:53:14.862289    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:14.862388    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:14.879332    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 17:53:14.888407    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:53:14.897249    4178 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:53:14.897300    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:53:14.906191    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:14.914943    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:53:14.923611    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:14.932390    4178 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:53:14.941382    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:53:14.950233    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:53:14.959047    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 17:53:14.967887    4178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:53:14.975975    4178 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 17:53:14.976018    4178 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 17:53:14.985185    4178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 17:53:14.993181    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:15.086628    4178 ssh_runner.go:195] Run: sudo systemctl restart containerd
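The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false (i.e. the cgroupfs driver), migrate legacy runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. Assuming those paths, the result can be spot-checked right after the restart, while containerd is still the active runtime:

    grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info | head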
	I0926 17:53:15.106310    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:15.106396    4178 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:53:15.118546    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:15.129665    4178 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:53:15.143061    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:15.154154    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:15.164978    4178 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 17:53:15.188125    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:15.199509    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:15.214608    4178 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:53:15.217523    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:53:15.225391    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 17:53:15.238858    4178 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:53:15.337444    4178 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:53:15.437802    4178 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:53:15.437879    4178 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 17:53:15.451733    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:15.563208    4178 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:53:17.891140    4178 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.327906141s)
	I0926 17:53:17.891209    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 17:53:17.902729    4178 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0926 17:53:17.915694    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:17.926164    4178 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 17:53:18.028587    4178 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 17:53:18.135687    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:18.246049    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 17:53:18.259788    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:18.270995    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:18.379007    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
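What the sequence from 17:53:15 onward amounts to: having detected that Docker should own the node, minikube stops containerd and cri-o if active, points crictl at the cri-dockerd socket, then unmasks and enables Docker and brings up the cri-docker socket and service. Condensed into one sketch (the real run interleaves is-active checks and daemon-reloads between each step):

    sudo systemctl stop -f containerd crio 2>/dev/null || true
    sudo systemctl unmask docker.service
    sudo systemctl enable docker.socket
    sudo systemctl daemon-reload
    sudo systemctl restart docker
    sudo systemctl enable --now cri-docker.socket cri-docker.service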
	I0926 17:53:18.442458    4178 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 17:53:18.442555    4178 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0926 17:53:18.447167    4178 start.go:563] Will wait 60s for crictl version
	I0926 17:53:18.447233    4178 ssh_runner.go:195] Run: which crictl
	I0926 17:53:18.450364    4178 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 17:53:18.474973    4178 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0926 17:53:18.475082    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:53:18.492744    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:53:18.534852    4178 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0926 17:53:18.534897    4178 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:53:18.535304    4178 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0926 17:53:18.539884    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
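Injecting host.minikube.internal uses a brace group instead of sed: filter out any existing entry, append a fresh one, write to a temp file, then sudo cp it back, which works even though the SSH user cannot write /etc/hosts directly. Generalized (the name and IP below mirror this run):

    name=host.minikube.internal
    ip=192.169.0.1
    { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
    sudo cp "/tmp/h.$$" /etc/hosts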
	I0926 17:53:18.549924    4178 kubeadm.go:883] updating cluster {Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 17:53:18.550017    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:53:18.550087    4178 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 17:53:18.562413    4178 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0926 17:53:18.562429    4178 docker.go:615] Images already preloaded, skipping extraction
	I0926 17:53:18.562517    4178 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 17:53:18.574107    4178 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0926 17:53:18.574127    4178 cache_images.go:84] Images are preloaded, skipping loading
	I0926 17:53:18.574137    4178 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0926 17:53:18.574213    4178 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-476000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
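The kubelet drop-in above follows the same ExecStart-reset idiom as the docker unit: an empty ExecStart= clears the inherited command before the versioned binary is launched with the node's identity flags (--hostname-override, --node-ip). Once installed, the merged result can be inspected on the guest with:

    systemctl cat kubelet                 # base unit plus 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart   # the effective command line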
	I0926 17:53:18.574296    4178 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0926 17:53:18.611557    4178 cni.go:84] Creating CNI manager for ""
	I0926 17:53:18.611571    4178 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0926 17:53:18.611586    4178 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 17:53:18.611607    4178 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-476000 NodeName:ha-476000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 17:53:18.611700    4178 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-476000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
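The generated kubeadm.yaml stacks four documents: InitConfiguration and ClusterConfiguration (v1beta3), a KubeletConfiguration matching the cgroupfs driver and cri-dockerd socket chosen above, and a KubeProxyConfiguration with conntrack tuning disabled. On a fresh node it could be sanity-checked without side effects (on this restart path minikube skips init entirely) along the lines of:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run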
	
	I0926 17:53:18.611713    4178 kube-vip.go:115] generating kube-vip config ...
	I0926 17:53:18.611769    4178 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0926 17:53:18.624452    4178 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0926 17:53:18.624524    4178 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
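kube-vip runs as a static pod and claims the HA virtual IP 192.169.0.254 through leader election on the Lease named plndr-cp-lock; lb_enable/lb_port additionally load-balance API-server traffic across the control planes on 8443. Once the cluster is reachable, the current VIP holder can be read from that lease (a kubeconfig for this profile is assumed):

    kubectl -n kube-system get lease plndr-cp-lock -o yaml
    ping -c 1 192.169.0.254   # the VIP should answer from the current leader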
	I0926 17:53:18.624583    4178 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0926 17:53:18.632661    4178 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 17:53:18.632722    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0926 17:53:18.640016    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0926 17:53:18.653424    4178 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 17:53:18.666861    4178 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0926 17:53:18.680665    4178 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0926 17:53:18.694237    4178 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0926 17:53:18.697273    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 17:53:18.706489    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:18.799127    4178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:53:18.813428    4178 certs.go:68] Setting up /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000 for IP: 192.169.0.5
	I0926 17:53:18.813441    4178 certs.go:194] generating shared ca certs ...
	I0926 17:53:18.813450    4178 certs.go:226] acquiring lock for ca certs: {Name:mk3d4665cb5756ae49ea6f7cd2a58d273aecc507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:18.813627    4178 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key
	I0926 17:53:18.813697    4178 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key
	I0926 17:53:18.813709    4178 certs.go:256] generating profile certs ...
	I0926 17:53:18.813816    4178 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key
	I0926 17:53:18.813837    4178 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9
	I0926 17:53:18.813853    4178 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0926 17:53:19.198737    4178 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9 ...
	I0926 17:53:19.198759    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9: {Name:mkf72026f41cf052c5981dfd73bcc3ea46813a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.199347    4178 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9 ...
	I0926 17:53:19.199358    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9: {Name:mkb6fc9895bd700bb149434e702cedd545112b66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.199565    4178 certs.go:381] copying /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt.961e0ed9 -> /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt
	I0926 17:53:19.199778    4178 certs.go:385] copying /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.961e0ed9 -> /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key
	I0926 17:53:19.200020    4178 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key
	I0926 17:53:19.200030    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0926 17:53:19.200052    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0926 17:53:19.200071    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0926 17:53:19.200089    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0926 17:53:19.200107    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0926 17:53:19.200125    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0926 17:53:19.200142    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0926 17:53:19.200160    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0926 17:53:19.200250    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem (1338 bytes)
	W0926 17:53:19.200297    4178 certs.go:480] ignoring /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679_empty.pem, impossibly tiny 0 bytes
	I0926 17:53:19.200306    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 17:53:19.200335    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem (1082 bytes)
	I0926 17:53:19.200365    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem (1123 bytes)
	I0926 17:53:19.200393    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem (1675 bytes)
	I0926 17:53:19.200455    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:53:19.200488    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem -> /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.200508    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.200526    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.200943    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 17:53:19.229781    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 17:53:19.249730    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 17:53:19.269922    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0926 17:53:19.290358    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0926 17:53:19.309964    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 17:53:19.329782    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 17:53:19.349170    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 17:53:19.368557    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem --> /usr/share/ca-certificates/1679.pem (1338 bytes)
	I0926 17:53:19.388315    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1708 bytes)
	I0926 17:53:19.407646    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 17:53:19.427156    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 17:53:19.441065    4178 ssh_runner.go:195] Run: openssl version
	I0926 17:53:19.445301    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1679.pem && ln -fs /usr/share/ca-certificates/1679.pem /etc/ssl/certs/1679.pem"
	I0926 17:53:19.453728    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.457317    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:36 /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.457357    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1679.pem
	I0926 17:53:19.461742    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1679.pem /etc/ssl/certs/51391683.0"
	I0926 17:53:19.470198    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0926 17:53:19.478616    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.482140    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:36 /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.482201    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0926 17:53:19.486473    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 17:53:19.494777    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 17:53:19.503295    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.506902    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.506943    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:19.511360    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
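The odd-looking symlink names (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL's CApath convention: each certificate is linked as <subject-hash>.0 so verification by directory lookup works. The hash is reproducible from the PEM itself:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above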
	I0926 17:53:19.519826    4178 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 17:53:19.523465    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0926 17:53:19.528006    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0926 17:53:19.532444    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0926 17:53:19.537126    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0926 17:53:19.541512    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0926 17:53:19.545827    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
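Each control-plane certificate is probed with openssl x509 -checkend 86400, which exits non-zero when the cert expires within the next 24 hours, the cue for minikube to regenerate it before restarting the cluster. The same guard in isolation:

    if ! openssl x509 -noout -checkend 86400 \
         -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "cert expires within 24h; needs regeneration" >&2
    fi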
	I0926 17:53:19.550166    4178 kubeadm.go:392] StartCluster: {Name:ha-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:53:19.550298    4178 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 17:53:19.561803    4178 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 17:53:19.569639    4178 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0926 17:53:19.569650    4178 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0926 17:53:19.569698    4178 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0926 17:53:19.577403    4178 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0926 17:53:19.577718    4178 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-476000" does not appear in /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:19.577801    4178 kubeconfig.go:62] /Users/jenkins/minikube-integration/19711-1128/kubeconfig needs updating (will repair): [kubeconfig missing "ha-476000" cluster setting kubeconfig missing "ha-476000" context setting]
	I0926 17:53:19.577967    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/kubeconfig: {Name:mkb9c069c578c490d8cb734b05a21821e9215482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.578378    4178 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:19.578577    4178 kapi.go:59] client config for ha-476000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key", CAFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7459f00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 17:53:19.578890    4178 cert_rotation.go:140] Starting client certificate rotation controller
	I0926 17:53:19.579075    4178 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0926 17:53:19.586457    4178 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0926 17:53:19.586468    4178 kubeadm.go:597] duration metric: took 16.814329ms to restartPrimaryControlPlane
	I0926 17:53:19.586474    4178 kubeadm.go:394] duration metric: took 36.313109ms to StartCluster
	I0926 17:53:19.586484    4178 settings.go:142] acquiring lock: {Name:mka8948d0f70add5c5f20f2eca7124a97a496c1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.586556    4178 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:19.586877    4178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/kubeconfig: {Name:mkb9c069c578c490d8cb734b05a21821e9215482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:19.587096    4178 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:53:19.587108    4178 start.go:241] waiting for startup goroutines ...
	I0926 17:53:19.587128    4178 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 17:53:19.587252    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:19.629430    4178 out.go:177] * Enabled addons: 
	I0926 17:53:19.650423    4178 addons.go:510] duration metric: took 63.269239ms for enable addons: enabled=[]
	I0926 17:53:19.650464    4178 start.go:246] waiting for cluster config update ...
	I0926 17:53:19.650475    4178 start.go:255] writing updated cluster config ...
	I0926 17:53:19.672508    4178 out.go:201] 
	I0926 17:53:19.693989    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:19.694118    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:19.716427    4178 out.go:177] * Starting "ha-476000-m02" control-plane node in "ha-476000" cluster
	I0926 17:53:19.758555    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:53:19.758588    4178 cache.go:56] Caching tarball of preloaded images
	I0926 17:53:19.758767    4178 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 17:53:19.758785    4178 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:53:19.758898    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:19.759817    4178 start.go:360] acquireMachinesLock for ha-476000-m02: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:53:19.759922    4178 start.go:364] duration metric: took 80.364µs to acquireMachinesLock for "ha-476000-m02"
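
acquireMachinesLock above retries with Delay:500ms until Timeout:13m0s before giving up. A sketch of the same acquire/retry/timeout shape using an O_EXCL lock file; minikube's real lock is a named mutex, so treat this purely as an illustration of the pattern:

    // lock_sketch.go: retry acquiring an exclusive lock every delay
    // until a deadline, then release via the returned func.
    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    func acquire(path string, delay, timeout time.Duration) (func(), error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		// O_EXCL makes creation atomic: whoever creates the file holds the lock.
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, errors.New("timed out acquiring " + path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquire("/tmp/ha-476000-m02.lock", 500*time.Millisecond, 13*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release()
    	fmt.Println("lock held; safe to start the machine")
    }
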
	I0926 17:53:19.759947    4178 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:53:19.759956    4178 fix.go:54] fixHost starting: m02
	I0926 17:53:19.760406    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:19.760442    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:19.769605    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52063
	I0926 17:53:19.770014    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:19.770353    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:19.770365    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:19.770608    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:19.770743    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:19.770835    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetState
	I0926 17:53:19.770922    4178 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:19.771000    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid from json: 4002
	I0926 17:53:19.771916    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid 4002 missing from process table
	I0926 17:53:19.771940    4178 fix.go:112] recreateIfNeeded on ha-476000-m02: state=Stopped err=<nil>
	I0926 17:53:19.771957    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	W0926 17:53:19.772037    4178 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:53:19.814436    4178 out.go:177] * Restarting existing hyperkit VM for "ha-476000-m02" ...
	I0926 17:53:19.835535    4178 main.go:141] libmachine: (ha-476000-m02) Calling .Start
	I0926 17:53:19.835810    4178 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:19.835874    4178 main.go:141] libmachine: (ha-476000-m02) minikube might have been shut down in an unclean way; the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid
	I0926 17:53:19.837665    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid 4002 missing from process table
	I0926 17:53:19.837678    4178 main.go:141] libmachine: (ha-476000-m02) DBG | pid 4002 is in state "Stopped"
	I0926 17:53:19.837694    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid...
	I0926 17:53:19.838041    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Using UUID 58f499c4-942a-445b-bae0-ab27a7b8106e
	I0926 17:53:19.865707    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Generated MAC 9e:5:36:80:93:e3
	I0926 17:53:19.865728    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000
	I0926 17:53:19.865872    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"58f499c4-942a-445b-bae0-ab27a7b8106e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:19.865901    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"58f499c4-942a-445b-bae0-ab27a7b8106e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:53:19.865946    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "58f499c4-942a-445b-bae0-ab27a7b8106e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/ha-476000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"}
	I0926 17:53:19.866020    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 58f499c4-942a-445b-bae0-ab27a7b8106e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/ha-476000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"
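
The Start/check/Arguments lines show the driver assembling a hyperkit argv from the machine config and launching it; the "Pid is 4198" line below is the resulting child pid. A compilable sketch of that composition, with paths shortened to a placeholder state directory and only the flag layout taken from the log:

    // hyperkit_launch.go: sketch (not the actual driver code) of
    // composing and starting a hyperkit command line like the one above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	stateDir := "/tmp/machines/ha-476000-m02" // placeholder
    	args := []string{
    		"-A", "-u",
    		"-F", stateDir + "/hyperkit.pid", // pid file checked on restart
    		"-c", "2", // CPUs
    		"-m", "2200M", // memory
    		"-s", "0:0,hostbridge",
    		"-s", "31,lpc",
    		"-s", "1:0,virtio-net",
    		"-U", "58f499c4-942a-445b-bae0-ab27a7b8106e", // stable UUID -> stable MAC/lease
    		"-s", "2:0,virtio-blk," + stateDir + "/ha-476000-m02.rawdisk",
    		"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
    		"-s", "4,virtio-rnd",
    		"-f", "kexec," + stateDir + "/bzimage," + stateDir + "/initrd,loglevel=3 console=ttyS0",
    	}
    	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
    	if err := cmd.Start(); err != nil {
    		fmt.Println("hyperkit start failed:", err)
    		return
    	}
    	fmt.Println("hyperkit pid:", cmd.Process.Pid) // corresponds to "Pid is 4198"
    }
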
	I0926 17:53:19.866041    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 17:53:19.867306    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 DEBUG: hyperkit: Pid is 4198
	I0926 17:53:19.867704    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Attempt 0
	I0926 17:53:19.867718    4178 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:19.867787    4178 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid from json: 4198
	I0926 17:53:19.869727    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Searching for 9e:5:36:80:93:e3 in /var/db/dhcpd_leases ...
	I0926 17:53:19.869759    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:53:19.869772    4178 main.go:141] libmachine: (ha-476000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 17:53:19.869793    4178 main.go:141] libmachine: (ha-476000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 17:53:19.869821    4178 main.go:141] libmachine: (ha-476000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f751f8}
	I0926 17:53:19.869834    4178 main.go:141] libmachine: (ha-476000-m02) DBG | Found match: 9e:5:36:80:93:e3
	I0926 17:53:19.869848    4178 main.go:141] libmachine: (ha-476000-m02) DBG | IP: 192.169.0.6
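
The lease search above maps the VM's generated MAC to an IP by scanning /var/db/dhcpd_leases. A sketch of that lookup; the one-field-per-line ip_address=/hw_address= layout follows the macOS bootpd lease format, but this is an illustration, not the driver's exact parser:

    // lease_lookup.go: find the DHCP lease whose hw_address matches
    // the VM's MAC and return its ip_address.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func leaseIP(path, mac string) (string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=") // remember until hw_address appears
    		case strings.HasPrefix(line, "hw_address="):
    			// e.g. hw_address=1,9e:5:36:80:93:e3 -- compare the MAC suffix
    			if strings.HasSuffix(line, mac) {
    				return ip, nil
    			}
    		}
    	}
    	return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
    	ip, err := leaseIP("/var/db/dhcpd_leases", "9e:5:36:80:93:e3")
    	fmt.Println(ip, err) // 192.169.0.6 in this run
    }
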
	I0926 17:53:19.869914    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetConfigRaw
	I0926 17:53:19.870579    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:19.870762    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:53:19.871158    4178 machine.go:93] provisionDockerMachine start ...
	I0926 17:53:19.871172    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:19.871294    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:19.871392    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:19.871530    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:19.871631    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:19.871718    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:19.871893    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:19.872031    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:19.872038    4178 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 17:53:19.875766    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 17:53:19.884496    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 17:53:19.885379    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:19.885391    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:19.885398    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:19.885403    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:20.270703    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 17:53:20.270718    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 17:53:20.385412    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:53:20.385431    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:53:20.385441    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:53:20.385468    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:53:20.386358    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 17:53:20.386369    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 17:53:25.988386    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0926 17:53:25.988424    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0926 17:53:25.988435    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0926 17:53:26.012163    4178 main.go:141] libmachine: (ha-476000-m02) DBG | 2024/09/26 17:53:26 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0926 17:53:30.140708    4178 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.6:22: connect: connection refused
	I0926 17:53:33.199866    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
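
Every "About to run SSH command" line is a command executed on the guest over an SSH session like the one established above. A sketch of such a runner using golang.org/x/crypto/ssh; the key path is a placeholder, and minikube's own client adds the retries and timeouts this sketch omits:

    // ssh_run.go: dial the guest, open a session, run one command,
    // and return its combined output (the "SSH cmd err, output" pairs).
    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func runSSH(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, no known_hosts
    	})
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runSSH("192.169.0.6:22", "docker",
    		"/tmp/machines/ha-476000-m02/id_rsa", // placeholder key path
    		`sudo hostname ha-476000-m02 && echo "ha-476000-m02" | sudo tee /etc/hostname`)
    	fmt.Println(out, err)
    }
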
	I0926 17:53:33.199881    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetMachineName
	I0926 17:53:33.200004    4178 buildroot.go:166] provisioning hostname "ha-476000-m02"
	I0926 17:53:33.200013    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetMachineName
	I0926 17:53:33.200123    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.200213    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.200322    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.200426    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.200540    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.200702    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.200858    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.200867    4178 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-476000-m02 && echo "ha-476000-m02" | sudo tee /etc/hostname
	I0926 17:53:33.269037    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-476000-m02
	
	I0926 17:53:33.269056    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.269193    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.269285    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.269368    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.269450    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.269573    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.269735    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.269746    4178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-476000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-476000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-476000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:53:33.331289    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 17:53:33.331305    4178 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 17:53:33.331314    4178 buildroot.go:174] setting up certificates
	I0926 17:53:33.331321    4178 provision.go:84] configureAuth start
	I0926 17:53:33.331328    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetMachineName
	I0926 17:53:33.331463    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:33.331556    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.331643    4178 provision.go:143] copyHostCerts
	I0926 17:53:33.331674    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:33.331734    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 17:53:33.331740    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:53:33.331856    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 17:53:33.332044    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:33.332093    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 17:53:33.332098    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:53:33.332176    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 17:53:33.332314    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:33.332352    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 17:53:33.332356    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:53:33.332427    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 17:53:33.332570    4178 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.ha-476000-m02 san=[127.0.0.1 192.169.0.6 ha-476000-m02 localhost minikube]
	I0926 17:53:33.395607    4178 provision.go:177] copyRemoteCerts
	I0926 17:53:33.395696    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 17:53:33.395715    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.395906    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.396015    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.396100    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.396196    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:33.431740    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 17:53:33.431806    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0926 17:53:33.452053    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 17:53:33.452106    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 17:53:33.471760    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 17:53:33.471825    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 17:53:33.490896    4178 provision.go:87] duration metric: took 159.567474ms to configureAuth
	I0926 17:53:33.490910    4178 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:53:33.491086    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:33.491099    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:33.491231    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.491321    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.491413    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.491498    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.491591    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.491713    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.491847    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.491854    4178 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:53:33.547403    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:53:33.547417    4178 buildroot.go:70] root file system type: tmpfs
	I0926 17:53:33.547504    4178 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:53:33.547518    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.547665    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.547775    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.547896    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.547997    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.548125    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.548268    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.548312    4178 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:53:33.613348    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 17:53:33.613367    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:33.613495    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:33.613582    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.613661    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:33.613747    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:33.613879    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:33.614018    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:33.614033    4178 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:53:35.261247    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
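
The diff/mv/systemctl one-liner above makes the unit update idempotent: the live unit is only replaced and docker only restarted when the rendered file actually differs. The same compare-then-swap shape as a local Go sketch (the paths and the systemctl sequence are illustrative, mirroring the command in the log):

    // unit_update.go: write-if-changed for a systemd unit, followed by
    // daemon-reload/enable/restart only when a swap happened.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    func updateUnit(path string, want []byte) error {
    	have, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(have, want) {
    		return nil // unchanged: skip the restart entirely
    	}
    	if err := os.WriteFile(path+".new", want, 0o644); err != nil {
    		return err
    	}
    	if err := os.Rename(path+".new", path); err != nil {
    		return err
    	}
    	// systemd must re-read units before a restart picks up the change.
    	for _, args := range [][]string{
    		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
    	} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
    	fmt.Println(updateUnit("/lib/systemd/system/docker.service", unit))
    }
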
	I0926 17:53:35.261262    4178 machine.go:96] duration metric: took 15.390039559s to provisionDockerMachine
	I0926 17:53:35.261270    4178 start.go:293] postStartSetup for "ha-476000-m02" (driver="hyperkit")
	I0926 17:53:35.261294    4178 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:53:35.261308    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.261509    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:53:35.261522    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.261612    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.261704    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.261809    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.261922    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:35.302268    4178 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:53:35.305656    4178 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 17:53:35.305666    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 17:53:35.305765    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 17:53:35.305947    4178 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 17:53:35.305953    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 17:53:35.306171    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 17:53:35.314020    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
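
The filesync scan above mirrors anything under .minikube/files into the VM at the same relative path, which is how files/etc/ssl/certs/16792.pem lands at /etc/ssl/certs/16792.pem. A sketch of deriving those source-to-destination pairs; the root directory is a placeholder:

    // filesync_pairs.go: walk a local assets root and compute where each
    // file would be copied inside the guest.
    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    )

    func assets(root string) ([][2]string, error) {
    	var pairs [][2]string
    	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() {
    			return err
    		}
    		rel, err := filepath.Rel(root, p)
    		if err != nil {
    			return err
    		}
    		// files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
    		pairs = append(pairs, [2]string{p, "/" + filepath.ToSlash(rel)})
    		return nil
    	})
    	return pairs, err
    }

    func main() {
    	pairs, err := assets("/tmp/.minikube/files") // placeholder root
    	fmt.Println(pairs, err)
    }
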
	I0926 17:53:35.344643    4178 start.go:296] duration metric: took 83.349532ms for postStartSetup
	I0926 17:53:35.344681    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.344863    4178 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0926 17:53:35.344877    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.344965    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.345056    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.345137    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.345223    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:35.381164    4178 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0926 17:53:35.381229    4178 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0926 17:53:35.414571    4178 fix.go:56] duration metric: took 15.654555871s for fixHost
	I0926 17:53:35.414597    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.414747    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.414839    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.414932    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.415022    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.415156    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:53:35.415295    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0926 17:53:35.415302    4178 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:53:35.472100    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398415.586409353
	
	I0926 17:53:35.472129    4178 fix.go:216] guest clock: 1727398415.586409353
	I0926 17:53:35.472134    4178 fix.go:229] Guest: 2024-09-26 17:53:35.586409353 -0700 PDT Remote: 2024-09-26 17:53:35.414586 -0700 PDT m=+34.982399519 (delta=171.823353ms)
	I0926 17:53:35.472150    4178 fix.go:200] guest clock delta is within tolerance: 171.823353ms
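
The guest-clock check runs date +%s.%N on the VM and accepts the drift if it is small enough. Recomputing this run's 171.823353ms delta from the two timestamps in the log; the one-second tolerance below is an assumption for illustration, not minikube's actual threshold:

    // clock_delta.go: compare the guest clock (from the log's
    // "date +%s.%N" output) against the host clock at the same moment.
    package main

    import (
    	"fmt"
    	"math"
    	"time"
    )

    func main() {
    	// 1727398415.586409353 is the guest's "date +%s.%N" output above.
    	guest := time.Unix(1727398415, 586409353)
    	// Host timestamp from the same log line (2024-09-26 17:53:35.414586 -0700 PDT).
    	host := time.Date(2024, 9, 26, 17, 53, 35, 414586000, time.FixedZone("PDT", -7*3600))

    	delta := guest.Sub(host)
    	const tolerance = time.Second // assumed value, for illustration only
    	if math.Abs(float64(delta)) <= float64(tolerance) {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta) // 171.823353ms
    	} else {
    		fmt.Printf("guest clock delta %v too large, would resync\n", delta)
    	}
    }
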
	I0926 17:53:35.472153    4178 start.go:83] releasing machines lock for "ha-476000-m02", held for 15.712162695s
	I0926 17:53:35.472170    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.472305    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:35.513568    4178 out.go:177] * Found network options:
	I0926 17:53:35.535552    4178 out.go:177]   - NO_PROXY=192.169.0.5
	W0926 17:53:35.557416    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:53:35.557455    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.558341    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.558579    4178 main.go:141] libmachine: (ha-476000-m02) Calling .DriverName
	I0926 17:53:35.558709    4178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:53:35.558764    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	W0926 17:53:35.558835    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:53:35.558964    4178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 17:53:35.558985    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHHostname
	I0926 17:53:35.559000    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.559215    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHPort
	I0926 17:53:35.559232    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.559433    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHKeyPath
	I0926 17:53:35.559464    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.559662    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetSSHUsername
	I0926 17:53:35.559681    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	I0926 17:53:35.559790    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m02/id_rsa Username:docker}
	W0926 17:53:35.596059    4178 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:53:35.596139    4178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 17:53:35.610162    4178 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 17:53:35.610178    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:35.610237    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:35.646709    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 17:53:35.656640    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:53:35.665578    4178 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:53:35.665623    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:53:35.674574    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:35.683489    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:53:35.692471    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:53:35.701275    4178 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:53:35.710401    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:53:35.719421    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:53:35.728448    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 17:53:35.738067    4178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:53:35.746743    4178 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 17:53:35.746802    4178 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 17:53:35.755939    4178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 17:53:35.763977    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:35.862563    4178 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 17:53:35.881531    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:53:35.881616    4178 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:53:35.899471    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:35.910823    4178 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:53:35.923558    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:53:35.935946    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:35.946007    4178 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 17:53:35.969898    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:53:35.980115    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:53:35.995271    4178 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:53:35.998508    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:53:36.005810    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 17:53:36.019492    4178 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:53:36.116976    4178 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:53:36.228090    4178 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:53:36.228117    4178 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 17:53:36.242164    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:36.335597    4178 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:53:38.678847    4178 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.343223137s)
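
The "configuring docker to use cgroupfs" step above scp's a small /etc/docker/daemon.json (130 bytes in this run) before the restart. A sketch of generating such a payload; "exec-opts" with native.cgroupdriver is dockerd's standard knob for the cgroup driver, but the exact keys minikube writes are not shown in the log:

    // daemon_json.go: render a dockerd daemon.json selecting the
    // cgroupfs cgroup driver (illustrative field set).
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	cfg := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	b, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	// This payload would be copied to /etc/docker/daemon.json,
    	// followed by daemon-reload and a docker restart as above.
    	fmt.Println(string(b))
    }
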
	I0926 17:53:38.678917    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 17:53:38.689531    4178 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0926 17:53:38.702816    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:38.713151    4178 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 17:53:38.819068    4178 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 17:53:38.926667    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:39.040074    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 17:53:39.054197    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:53:39.065256    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:39.163219    4178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 17:53:39.228416    4178 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 17:53:39.228518    4178 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0926 17:53:39.233191    4178 start.go:563] Will wait 60s for crictl version
	I0926 17:53:39.233249    4178 ssh_runner.go:195] Run: which crictl
	I0926 17:53:39.236580    4178 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 17:53:39.262407    4178 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0926 17:53:39.262495    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:53:39.279010    4178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:53:39.317905    4178 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0926 17:53:39.359545    4178 out.go:177]   - env NO_PROXY=192.169.0.5
	I0926 17:53:39.381103    4178 main.go:141] libmachine: (ha-476000-m02) Calling .GetIP
	I0926 17:53:39.381320    4178 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0926 17:53:39.384579    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 17:53:39.394395    4178 mustload.go:65] Loading cluster: ha-476000
	I0926 17:53:39.394560    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:39.394810    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:39.394834    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:39.403482    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52086
	I0926 17:53:39.403823    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:39.404150    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:39.404164    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:39.404434    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:39.404542    4178 main.go:141] libmachine: (ha-476000) Calling .GetState
	I0926 17:53:39.404632    4178 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:53:39.404706    4178 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 4191
	I0926 17:53:39.405678    4178 host.go:66] Checking if "ha-476000" exists ...
	I0926 17:53:39.405956    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:53:39.405986    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:53:39.414686    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52088
	I0926 17:53:39.415056    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:53:39.415379    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:53:39.415388    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:53:39.415605    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:53:39.415728    4178 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:53:39.415830    4178 certs.go:68] Setting up /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000 for IP: 192.169.0.6
	I0926 17:53:39.415836    4178 certs.go:194] generating shared ca certs ...
	I0926 17:53:39.415849    4178 certs.go:226] acquiring lock for ca certs: {Name:mk3d4665cb5756ae49ea6f7cd2a58d273aecc507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:53:39.416032    4178 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key
	I0926 17:53:39.416108    4178 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key
	I0926 17:53:39.416119    4178 certs.go:256] generating profile certs ...
	I0926 17:53:39.416243    4178 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key
	I0926 17:53:39.416331    4178 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key.462632c0
	I0926 17:53:39.416399    4178 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key
	I0926 17:53:39.416406    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0926 17:53:39.416427    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0926 17:53:39.416446    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0926 17:53:39.416465    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0926 17:53:39.416482    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0926 17:53:39.416510    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0926 17:53:39.416544    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0926 17:53:39.416564    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0926 17:53:39.416666    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem (1338 bytes)
	W0926 17:53:39.416716    4178 certs.go:480] ignoring /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679_empty.pem, impossibly tiny 0 bytes
	I0926 17:53:39.416725    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 17:53:39.416762    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem (1082 bytes)
	I0926 17:53:39.416795    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem (1123 bytes)
	I0926 17:53:39.416828    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem (1675 bytes)
	I0926 17:53:39.416893    4178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:53:39.416929    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.416949    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem -> /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.416967    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.416991    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:53:39.417078    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:53:39.417153    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:53:39.417237    4178 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:53:39.417320    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:53:39.447975    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0926 17:53:39.451073    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0926 17:53:39.458912    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0926 17:53:39.462003    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0926 17:53:39.470783    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0926 17:53:39.473836    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0926 17:53:39.481537    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0926 17:53:39.484645    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0926 17:53:39.492945    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0926 17:53:39.495978    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0926 17:53:39.503610    4178 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0926 17:53:39.506808    4178 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0926 17:53:39.514787    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 17:53:39.534891    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 17:53:39.554745    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 17:53:39.574668    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0926 17:53:39.594523    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0926 17:53:39.614131    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 17:53:39.633606    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 17:53:39.653376    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 17:53:39.673369    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 17:53:39.692952    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem --> /usr/share/ca-certificates/1679.pem (1338 bytes)
	I0926 17:53:39.712634    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1708 bytes)
	I0926 17:53:39.732005    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0926 17:53:39.745464    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0926 17:53:39.759232    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0926 17:53:39.772911    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0926 17:53:39.786441    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0926 17:53:39.800266    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0926 17:53:39.813927    4178 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
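The scp entries above stage the cluster PKI onto the joining node in two forms: files copied from the host's .minikube tree, and in-memory assets (sa.pub, sa.key, the front-proxy and etcd CAs, and the kubeconfig) written straight under /var/lib/minikube. A minimal Go sketch of the file-based half, using the stock scp binary rather than minikube's internal ssh_runner (scpToNode and the key path are illustrative, not minikube's API):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // scpToNode copies a local file to remotePath on the VM, mirroring the
    // "scp <src> --> <dst>" lines above. Key path and address are examples.
    func scpToNode(localPath, remotePath string) error {
        cmd := exec.Command("scp",
            "-i", "/path/to/machines/ha-476000/id_rsa", // hypothetical key path
            localPath,
            fmt.Sprintf("docker@192.169.0.5:%s", remotePath))
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("scp %s: %v: %s", localPath, err, out)
        }
        return nil
    }

    func main() {
        if err := scpToNode("/tmp/ca.crt", "/var/lib/minikube/certs/ca.crt"); err != nil {
            fmt.Println(err)
        }
    }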
	I0926 17:53:39.827332    4178 ssh_runner.go:195] Run: openssl version
	I0926 17:53:39.831566    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0926 17:53:39.839850    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.843163    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:36 /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.843206    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0926 17:53:39.847374    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 17:53:39.855624    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 17:53:39.863965    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.867400    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.867452    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:53:39.871715    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 17:53:39.879907    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1679.pem && ln -fs /usr/share/ca-certificates/1679.pem /etc/ssl/certs/1679.pem"
	I0926 17:53:39.888247    4178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.891606    4178 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:36 /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.891654    4178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1679.pem
	I0926 17:53:39.895855    4178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1679.pem /etc/ssl/certs/51391683.0"
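The openssl runs above install each CA into the node's trust store: the PEM is linked under /usr/share/ca-certificates, `openssl x509 -hash` computes its subject hash, and a <hash>.0 symlink (e.g. b5213941.0) is created so OpenSSL's hashed-directory lookup can resolve it. A sketch of that pattern under the same assumptions (linkBySubjectHash is an illustrative helper, not minikube code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkBySubjectHash computes the OpenSSL subject hash of a PEM and
    // symlinks it as <hash>.0 in certsDir, like the ln -fs commands above.
    func linkBySubjectHash(pem, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        link := fmt.Sprintf("%s/%s.0", certsDir, strings.TrimSpace(string(out)))
        _ = os.Remove(link) // emulate ln -fs: drop any stale link first
        return os.Symlink(pem, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Println(err)
        }
    }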
	I0926 17:53:39.904043    4178 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 17:53:39.907450    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0926 17:53:39.911778    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0926 17:53:39.915909    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0926 17:53:39.920037    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0926 17:53:39.924167    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0926 17:53:39.928372    4178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
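Each control-plane certificate is then checked with `openssl x509 -checkend 86400`, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); a failing check would force regeneration rather than reuse. The equivalent check, sketched:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // expiresWithinDay reports whether certPath expires inside 24h, exactly
    // what `openssl x509 -checkend 86400` tests: exit 0 means still valid.
    func expiresWithinDay(certPath string) bool {
        err := exec.Command("openssl", "x509", "-noout",
            "-in", certPath, "-checkend", "86400").Run()
        return err != nil
    }

    func main() {
        fmt.Println(expiresWithinDay("/var/lib/minikube/certs/etcd/server.crt"))
    }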
	I0926 17:53:39.932543    4178 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.1 docker true true} ...
	I0926 17:53:39.932604    4178 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-476000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-476000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
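The kubelet drop-in above pins the node-specific flags (--hostname-override=ha-476000-m02, --node-ip=192.169.0.6) for the joining control-plane node. A hedged sketch of assembling such a [Service] override from those inputs; the format is reconstructed from the log, not taken from minikube's source:

    package main

    import "fmt"

    // renderKubeletDropIn rebuilds the ExecStart override seen above from
    // the node-specific values; renderKubeletDropIn is an illustrative
    // helper, not minikube's kubeadm.go.
    func renderKubeletDropIn(version, node, ip string) string {
        exec := fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet "+
            "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
            "--config=/var/lib/kubelet/config.yaml "+
            "--hostname-override=%s "+
            "--kubeconfig=/etc/kubernetes/kubelet.conf "+
            "--node-ip=%s", version, node, ip)
        return "[Service]\nExecStart=\nExecStart=" + exec + "\n"
    }

    func main() {
        fmt.Print(renderKubeletDropIn("v1.31.1", "ha-476000-m02", "192.169.0.6"))
    }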
	I0926 17:53:39.932624    4178 kube-vip.go:115] generating kube-vip config ...
	I0926 17:53:39.932670    4178 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0926 17:53:39.944715    4178 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0926 17:53:39.944753    4178 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
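The manifest above configures kube-vip for ARP-based leader election (lease plndr-cp-lock, 5s lease duration, 3s renew deadline, 1s retry period), announcing the control-plane VIP 192.169.0.254 on eth0 with load-balancing to the API servers on 8443 auto-enabled. Because it lands under /etc/kubernetes/manifests (see the scp a few lines below), kubelet runs it as a static pod before any API server is reachable. A small sketch that parses the generated manifest and extracts the VIP, assuming gopkg.in/yaml.v3 and the path used below:

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    // pod models just enough of the manifest to read container env vars.
    type pod struct {
        Spec struct {
            Containers []struct {
                Env []struct {
                    Name  string `yaml:"name"`
                    Value string `yaml:"value"`
                } `yaml:"env"`
            } `yaml:"containers"`
        } `yaml:"spec"`
    }

    func main() {
        raw, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            panic(err)
        }
        var p pod
        if err := yaml.Unmarshal(raw, &p); err != nil {
            panic(err)
        }
        for _, c := range p.Spec.Containers {
            for _, e := range c.Env {
                if e.Name == "address" {
                    fmt.Println("control-plane VIP:", e.Value) // expect 192.169.0.254
                }
            }
        }
    }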
	I0926 17:53:39.944822    4178 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0926 17:53:39.953541    4178 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 17:53:39.953597    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0926 17:53:39.961618    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0926 17:53:39.975007    4178 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 17:53:39.988472    4178 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0926 17:53:40.002021    4178 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0926 17:53:40.004933    4178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
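The one-liner above updates /etc/hosts idempotently: grep -v strips any existing control-plane.minikube.internal mapping, the fresh "192.169.0.254<TAB>control-plane.minikube.internal" line is appended, and the result replaces the file through a temp copy. The same pattern in Go (ensureHostsEntry is an illustrative helper):

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry drops any line ending in "\t<host>" and appends a
    // fresh "<ip>\t<host>" mapping, writing through a temp file as above.
    func ensureHostsEntry(path, ip, host string) error {
        raw, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, path)
    }

    func main() {
        _ = ensureHostsEntry("/etc/hosts", "192.169.0.254", "control-plane.minikube.internal")
    }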
	I0926 17:53:40.015059    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:40.118867    4178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:53:40.133377    4178 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:53:40.133568    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:53:40.154757    4178 out.go:177] * Verifying Kubernetes components...
	I0926 17:53:40.196346    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:53:40.323445    4178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:53:40.338817    4178 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:53:40.339037    4178 kapi.go:59] client config for ha-476000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/client.key", CAFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7459f00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0926 17:53:40.339084    4178 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0926 17:53:40.339280    4178 node_ready.go:35] waiting up to 6m0s for node "ha-476000-m02" to be "Ready" ...
	I0926 17:53:40.339354    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:40.339359    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:40.339366    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:40.339369    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:47.201921    4178 round_trippers.go:574] Response Status:  in 6862 milliseconds
	I0926 17:53:48.202681    4178 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:48.202709    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:48.202713    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:48.202720    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:48.202724    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:49.203128    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:49.203194    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.1:52091->192.169.0.5:8443: read: connection reset by peer
	I0926 17:53:49.203240    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:49.203247    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:49.203252    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:49.203256    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:50.204478    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:50.204619    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:50.204631    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:50.204642    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:50.204649    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:51.204974    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:51.205045    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:51.205098    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:51.205108    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:51.205118    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:51.205124    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:52.205352    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:52.205474    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:52.205485    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:52.205496    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:52.205505    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:53.206703    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:53.206766    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:53.206822    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:53.206831    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:53.206843    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:53.206849    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:54.208032    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:54.208160    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:54.208172    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:54.208183    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:54.208190    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:55.208420    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:55.208484    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:55.208561    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:55.208572    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:55.208582    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:55.208586    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:56.209388    4178 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0926 17:53:56.209496    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:56.209507    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:56.209517    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:56.209529    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:57.211492    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:57.211560    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:57.211643    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:57.211654    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:57.211665    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:57.211671    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:58.213441    4178 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0926 17:53:58.213520    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:58.213528    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:58.213535    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:58.213538    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:53:59.215627    4178 round_trippers.go:574] Response Status:  in 1002 milliseconds
	I0926 17:53:59.215689    4178 node_ready.go:53] error getting node "ha-476000-m02": Get "https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02": dial tcp 192.169.0.5:8443: connect: connection refused
	I0926 17:53:59.215761    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:53:59.215770    4178 round_trippers.go:469] Request Headers:
	I0926 17:53:59.215781    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:53:59.215792    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:00.214970    4178 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0926 17:54:00.215057    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:00.215066    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:00.215072    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:00.215075    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:02.766651    4178 round_trippers.go:574] Response Status: 200 OK in 2551 milliseconds
	I0926 17:54:02.767320    4178 node_ready.go:53] node "ha-476000-m02" has status "Ready":"False"
	I0926 17:54:02.767364    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:02.767371    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:02.767378    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:02.767382    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:02.808455    4178 round_trippers.go:574] Response Status: 200 OK in 41 milliseconds
	I0926 17:54:02.839499    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:02.839515    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:02.839522    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:02.839524    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:02.844502    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:03.339950    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:03.339974    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:03.340014    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:03.340033    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:03.343931    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:03.839836    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:03.839849    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:03.839855    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:03.839859    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:03.842811    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:04.340378    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:04.340403    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.340414    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.340421    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.344418    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:04.839736    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:04.839752    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.839758    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.839762    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.842629    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:04.843116    4178 node_ready.go:49] node "ha-476000-m02" has status "Ready":"True"
	I0926 17:54:04.843129    4178 node_ready.go:38] duration metric: took 24.503742617s for node "ha-476000-m02" to be "Ready" ...
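The wait above is a plain poll: GET /api/v1/nodes/ha-476000-m02 roughly once a second, treating connection-refused, connection-reset, and Retry-After responses as retryable until the node reports Ready (here after the VIP settled and the first 200 OK came back at 17:54:02). A client-go sketch of the same loop, assuming a standard kubeconfig path (waitNodeReady is illustrative, not minikube's node_ready.go):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node until its Ready condition is True,
    // swallowing transient errors (connection refused, resets) the way
    // the retries in the log above do.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("node %s not Ready within %s", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitNodeReady(cs, "ha-476000-m02", 6*time.Minute))
    }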
	I0926 17:54:04.843136    4178 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0926 17:54:04.843170    4178 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0926 17:54:04.843178    4178 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0926 17:54:04.843227    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0926 17:54:04.843232    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.843238    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.843242    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.851447    4178 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0926 17:54:04.858185    4178 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace to be "Ready" ...
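From here the loop alternates GETs on the coredns pod and on the node it is scheduled to, checking the pod's Ready condition each round (the "Ready":"False" lines below). The condition test itself amounts to the following (podReady is an illustrative helper, not minikube's pod_ready.go):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // podReady reports whether the PodReady condition is True, the check
    // behind the pod_ready.go "Ready" messages in this log.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionFalse},
        }}}
        fmt.Println(podReady(p)) // false, matching "Ready":"False" above
    }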
	I0926 17:54:04.858238    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:04.858243    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.858250    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.858254    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.860121    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:04.860597    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:04.860608    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:04.860614    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:04.860619    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:04.862704    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:05.358322    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:05.358334    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.358341    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.358344    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.361386    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:05.361939    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:05.361947    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.361954    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.361958    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.366335    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:05.858443    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:05.858462    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.858485    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.858489    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.861181    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:05.861691    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:05.861698    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:05.861704    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:05.861706    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:05.863911    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:06.359311    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:06.359342    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.359350    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.359354    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.362329    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:06.362841    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:06.362848    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.362854    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.362864    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.365951    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:06.860115    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:06.860140    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.860152    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.860192    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.863829    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:06.864356    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:06.864364    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:06.864370    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:06.864372    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:06.866293    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:06.866641    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:07.359755    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:07.359781    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.359791    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.359796    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.362929    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:07.363432    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:07.363440    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.363449    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.363454    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.365354    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:07.859403    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:07.859428    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.859440    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.859447    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.863936    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:07.864482    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:07.864489    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:07.864494    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:07.864497    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:07.866695    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:08.359070    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:08.359095    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.359104    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.359110    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.363413    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:08.363975    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:08.363983    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.363989    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.363996    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.366160    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:08.858562    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:08.858596    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.858604    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.858608    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.861584    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:08.862306    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:08.862313    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:08.862319    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:08.862329    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:08.864555    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:09.359666    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:09.359694    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.359706    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.359710    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.364444    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:09.364796    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:09.364802    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.364808    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.364812    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.367017    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:09.367391    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:09.859578    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:09.859628    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.859645    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.859654    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.863289    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:09.863926    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:09.863934    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:09.863940    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:09.863942    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:09.865998    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:10.358368    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:10.358385    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.358391    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.358396    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.366195    4178 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0926 17:54:10.366734    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:10.366743    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.366752    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.366755    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.369544    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:10.859656    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:10.859683    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.859694    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.859701    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.864043    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:10.864491    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:10.864499    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:10.864504    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:10.864508    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:10.866558    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:11.360000    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:11.360026    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.360038    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.360045    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.364064    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:11.364604    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:11.364611    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.364617    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.364620    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.366561    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:11.859988    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:11.860011    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.860023    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.860028    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.863780    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:11.864488    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:11.864496    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:11.864502    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:11.864505    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:11.866527    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:11.866879    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:12.359231    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:12.359302    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.359317    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.359325    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.363142    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:12.363807    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:12.363815    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.363820    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.363823    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.365720    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:12.859295    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:12.859321    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.859332    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.859336    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.863604    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:12.864232    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:12.864243    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:12.864249    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:12.864252    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:12.866340    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:13.360473    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:13.360500    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.360511    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.360516    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.364925    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:13.365659    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:13.365667    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.365672    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.365677    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.367805    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:13.858451    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:13.858477    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.858490    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.858495    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.862381    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:13.862921    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:13.862929    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:13.862934    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:13.862938    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:13.864941    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:14.358942    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:14.358966    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.359005    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.359013    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.365723    4178 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0926 17:54:14.366181    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:14.366189    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.366193    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.366197    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.368552    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:14.368954    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:14.860475    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:14.860501    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.860543    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.860550    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.864207    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:14.864620    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:14.864628    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:14.864634    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:14.864637    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:14.866896    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:15.358734    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:15.358751    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.358757    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.358761    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.361477    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:15.362047    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:15.362056    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.362062    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.362072    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.364404    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:15.859641    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:15.859669    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.859681    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.859690    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.864301    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:15.864755    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:15.864762    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:15.864767    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:15.864771    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:15.866941    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:16.358689    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:16.358713    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.358762    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.358771    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.363038    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:16.363637    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:16.363644    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.363649    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.363665    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.365580    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:16.858829    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:16.858848    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.858857    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.858864    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.861418    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:16.861895    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:16.861903    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:16.861908    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:16.861913    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:16.864330    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:16.864660    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:17.358538    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:17.358557    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.358576    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.358581    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.361634    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:17.362216    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:17.362224    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.362230    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.362235    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.364368    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:17.858951    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:17.859025    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.859068    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.859083    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.863132    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:17.863643    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:17.863651    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:17.863660    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:17.863665    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:17.865816    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:18.358377    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:18.358396    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.358403    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.358429    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.364859    4178 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0926 17:54:18.365288    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:18.365296    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.365303    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.365306    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.367423    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:18.859211    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:18.859237    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.859250    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.859257    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.863321    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:18.863832    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:18.863840    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:18.863846    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:18.863849    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:18.865860    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:18.866261    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:19.358438    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:19.358453    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.358460    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.358463    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.361068    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:19.361685    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:19.361694    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.361700    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.361703    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.364079    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:19.859935    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:19.859961    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.859972    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.859979    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.864189    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:19.864623    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:19.864630    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:19.864638    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:19.864641    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:19.866680    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:20.359100    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:20.359154    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.359164    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.359169    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.362081    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:20.362587    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:20.362595    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.362601    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.362604    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.364581    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:20.860535    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:20.860561    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.860573    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.860581    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.864595    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:20.865051    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:20.865063    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:20.865070    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:20.865074    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:20.866939    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:20.867377    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:21.358839    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:21.358864    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.358910    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.358919    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.362304    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:21.362899    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:21.362907    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.362913    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.362923    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.364904    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:21.859198    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:21.859224    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.859235    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.859244    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.863464    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:21.863902    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:21.863911    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:21.863916    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:21.863920    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:21.866008    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:22.358500    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:22.358557    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.358567    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.358581    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.363039    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:22.363487    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:22.363495    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.363501    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.363504    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.365560    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:22.860486    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:22.860511    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.860523    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.860549    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.865059    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:22.865691    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:22.865699    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:22.865705    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:22.865708    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:22.867780    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:22.868136    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:23.358997    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:23.359023    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.359035    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.359043    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.363268    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:23.363930    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:23.363938    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.363944    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.363948    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.365982    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:23.858407    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:23.858421    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.858452    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.858457    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.861385    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:23.861801    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:23.861812    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:23.861818    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:23.861823    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:23.864061    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:24.360526    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:24.360553    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.360565    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.360571    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.364721    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:24.365349    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:24.365356    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.365362    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.365365    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.367430    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:24.858605    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:24.858630    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.858641    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.858648    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.862472    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:24.863003    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:24.863010    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:24.863016    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:24.863018    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:24.864908    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:25.358639    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:25.358664    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.358677    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.358684    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.362945    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:25.363487    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:25.363495    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.363501    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.363503    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.365691    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:25.366062    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:25.859315    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:25.859333    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.859341    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.859364    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.862801    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:25.863276    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:25.863284    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:25.863289    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:25.863293    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:25.865685    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:26.359001    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:26.359015    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.359021    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.359025    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.361573    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:26.362094    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:26.362101    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.362107    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.362111    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.364144    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:26.858599    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:26.858625    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.858637    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.858644    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.862247    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:26.862753    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:26.862762    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:26.862767    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:26.862771    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:26.864571    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:27.358862    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:27.358888    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.358899    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.358904    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.363109    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:27.363648    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:27.363657    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.363663    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.363669    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.365500    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:27.859752    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:27.859779    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.859790    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.859795    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.864255    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:27.864725    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:27.864733    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:27.864738    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:27.864741    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:27.866764    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:27.867055    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:28.359808    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:28.359835    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.359882    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.359890    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.363146    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:28.363572    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:28.363579    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.363585    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.363589    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.365498    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:28.858708    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:28.858734    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.858746    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.858752    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.862673    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:28.863231    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:28.863238    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:28.863244    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:28.863248    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:28.865181    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:29.359611    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:29.359640    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.359653    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.359660    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.362965    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:29.363411    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:29.363419    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.363425    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.363427    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.365174    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:29.859384    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:29.859402    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.859409    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.859414    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.862499    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:29.863033    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:29.863041    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:29.863047    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:29.863050    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:29.865154    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:30.359191    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:30.359209    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.359255    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.359265    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.361836    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:30.362303    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:30.362312    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.362317    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.362320    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.364567    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:30.364980    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:30.860033    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:30.860066    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.860101    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.860109    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.864359    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:30.864782    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:30.864790    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:30.864799    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:30.864805    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:30.866798    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:31.358678    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:31.358711    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.358762    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.358772    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.363329    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:31.363731    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:31.363739    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.363745    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.363751    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.365894    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:31.858683    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:31.858706    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.858718    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.858724    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.862717    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:31.863254    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:31.863262    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:31.863268    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:31.863272    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:31.865220    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:32.359370    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:32.359420    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.359434    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.359442    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.362904    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:32.363502    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:32.363510    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.363516    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.363518    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.365729    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:32.366016    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:32.859955    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:32.859990    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.859997    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.860001    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.874510    4178 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0926 17:54:32.875130    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:32.875137    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:32.875142    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:32.875145    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:32.883403    4178 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0926 17:54:33.359964    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:33.360006    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.360019    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.360025    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.362527    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:33.362934    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:33.362942    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.362948    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.362953    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.365277    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:33.860043    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:33.860070    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.860082    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.860089    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.864487    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:33.864960    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:33.864968    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:33.864974    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:33.864978    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:33.866813    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:34.359408    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:34.359422    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.359453    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.359457    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.361843    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:34.362407    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:34.362415    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.362419    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.362427    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.364587    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:34.859087    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:34.859113    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.859124    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.859132    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.863123    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:34.863508    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:34.863516    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:34.863522    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:34.863525    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:34.865516    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:34.865853    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:35.359972    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:35.359997    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.360039    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.360048    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.364311    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:35.364957    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:35.364964    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.364970    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.364974    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.367232    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:35.859251    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:35.859265    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.859271    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.859275    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.861746    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:35.862292    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:35.862304    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:35.862318    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:35.862323    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:35.864289    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:36.360234    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:36.360274    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.360284    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.360291    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.363297    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:36.363726    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:36.363734    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.363740    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.363743    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.365689    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:36.859037    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:36.859105    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.859119    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.859130    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.863205    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:36.863621    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:36.863629    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:36.863635    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:36.863638    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:36.865642    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:36.865933    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:37.359101    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:37.359127    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.359139    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.359145    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.363256    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:37.363851    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:37.363859    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.363865    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.363868    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.365908    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:37.859282    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:37.859308    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.859319    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.859325    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.863341    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:37.863718    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:37.863726    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:37.863731    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:37.863735    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:37.865672    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:38.359013    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:38.359055    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.359065    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.359070    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.361936    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:38.362521    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:38.362529    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.362534    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.362538    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.364699    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:38.859426    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:38.859453    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.859466    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.859475    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.863509    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:38.864012    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:38.864020    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:38.864025    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:38.864029    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:38.866259    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:38.866728    4178 pod_ready.go:103] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"False"
	I0926 17:54:39.358730    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:39.358748    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.358756    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.358765    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.362410    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:39.362956    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:39.362964    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.362970    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.362979    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.365004    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:39.858564    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:39.858584    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.858592    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.858598    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.861794    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:39.862200    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:39.862208    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:39.862214    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:39.862219    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:39.864175    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.358549    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:40.358586    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.358596    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.358600    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.361533    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:40.362003    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.362011    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.362017    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.362020    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.364141    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:40.860048    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-44l9n
	I0926 17:54:40.860077    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.860087    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.860093    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.863900    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:40.864305    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.864314    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.864320    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.864322    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.866266    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.866599    4178 pod_ready.go:93] pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.866610    4178 pod_ready.go:82] duration metric: took 36.008276067s for pod "coredns-7c65d6cfc9-44l9n" in "kube-system" namespace to be "Ready" ...
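The 36-second wait that just completed is a ~500 ms polling loop: each iteration fetches the pod, checks its Ready condition, and also re-fetches the hosting node. A minimal sketch of that pattern with client-go follows; it assumes a configured *kubernetes.Clientset and is an illustration of the technique, not minikube's pod_ready implementation.

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls every 500ms, up to timeout, until the named pod
// reports condition Ready=True (illustrative only).
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}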
	I0926 17:54:40.866616    4178 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7jwgv" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.866646    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7jwgv
	I0926 17:54:40.866651    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.866657    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.866661    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.868466    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.868930    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.868938    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.868944    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.868948    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.870736    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.871103    4178 pod_ready.go:93] pod "coredns-7c65d6cfc9-7jwgv" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.871111    4178 pod_ready.go:82] duration metric: took 4.489575ms for pod "coredns-7c65d6cfc9-7jwgv" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.871118    4178 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.871146    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-476000
	I0926 17:54:40.871150    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.871156    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.871160    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.873206    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:40.873700    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:40.873707    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.873713    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.873717    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.875461    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.875829    4178 pod_ready.go:93] pod "etcd-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.875837    4178 pod_ready.go:82] duration metric: took 4.713943ms for pod "etcd-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.875844    4178 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.875875    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-476000-m02
	I0926 17:54:40.875880    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.875885    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.875890    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.877741    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.878137    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:40.878145    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.878151    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.878155    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.880023    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.880375    4178 pod_ready.go:93] pod "etcd-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:40.880384    4178 pod_ready.go:82] duration metric: took 4.534554ms for pod "etcd-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.880390    4178 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:40.880419    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-476000-m03
	I0926 17:54:40.880424    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.880429    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.880433    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.882094    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.882474    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:40.882481    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:40.882486    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:40.882496    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:40.884251    4178 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 17:54:40.884613    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "etcd-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:40.884622    4178 pod_ready.go:82] duration metric: took 4.227661ms for pod "etcd-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:40.884628    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "etcd-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
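The WaitExtra line above shows why the node is fetched on every iteration: a pod scheduled on a node whose own Ready condition is not True (here ha-476000-m03 reports "Unknown") is skipped rather than blocking the full 6m0s wait. That gate reduces, in sketch form, to a node-condition check like the following; the helper name nodeIsReady is hypothetical.

import corev1 "k8s.io/api/core/v1"

// nodeIsReady reports whether a node's Ready condition is True. Pods on
// nodes where this returns false (False or Unknown status) are skipped
// instead of being waited on.
func nodeIsReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}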
	I0926 17:54:40.884638    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.061560    4178 request.go:632] Waited for 176.87189ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000
	I0926 17:54:41.061616    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000
	I0926 17:54:41.061655    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.061670    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.061677    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.065303    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:41.262138    4178 request.go:632] Waited for 196.341694ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:41.262261    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:41.262270    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.262282    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.262290    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.266333    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:41.266689    4178 pod_ready.go:93] pod "kube-apiserver-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:41.266699    4178 pod_ready.go:82] duration metric: took 382.053003ms for pod "kube-apiserver-ha-476000" in "kube-system" namespace to be "Ready" ...
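The "Waited for ... due to client-side throttling, not priority and fairness" entries come from client-go's built-in token-bucket rate limiter, which delays requests locally before they ever reach the API server. The limiter is configured via the QPS and Burst fields on rest.Config; a minimal sketch follows, with illustrative values rather than minikube's actual settings.

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newThrottledClient builds a clientset whose requests are rate-limited
// client-side; when the token bucket is empty, client-go logs the
// "Waited for ..." messages seen above.
func newThrottledClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 5.0  // sustained requests per second (illustrative value)
	cfg.Burst = 10 // short-term burst allowance (illustrative value)
	return kubernetes.NewForConfig(cfg)
}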
	I0926 17:54:41.266705    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.460472    4178 request.go:632] Waited for 193.723597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m02
	I0926 17:54:41.460525    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m02
	I0926 17:54:41.460535    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.460578    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.460588    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.464471    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:41.661359    4178 request.go:632] Waited for 196.505849ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:41.661462    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:41.661475    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.661486    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.661494    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.665427    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:41.665770    4178 pod_ready.go:93] pod "kube-apiserver-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:41.665780    4178 pod_ready.go:82] duration metric: took 399.068092ms for pod "kube-apiserver-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.665789    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:41.861535    4178 request.go:632] Waited for 195.701622ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m03
	I0926 17:54:41.861634    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-476000-m03
	I0926 17:54:41.861648    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:41.861668    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:41.861680    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:41.865792    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:42.061777    4178 request.go:632] Waited for 195.542882ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:42.061836    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:42.061869    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.061880    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.061888    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.066352    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:42.066752    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:42.066763    4178 pod_ready.go:82] duration metric: took 400.967857ms for pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:42.066770    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-apiserver-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:42.066774    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.260909    4178 request.go:632] Waited for 194.055971ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000
	I0926 17:54:42.260962    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000
	I0926 17:54:42.261001    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.261021    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.261031    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.264905    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:42.460758    4178 request.go:632] Waited for 195.327303ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:42.460808    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:42.460816    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.460827    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.460837    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.464434    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:42.464776    4178 pod_ready.go:93] pod "kube-controller-manager-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:42.464786    4178 pod_ready.go:82] duration metric: took 398.004555ms for pod "kube-controller-manager-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.464793    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.660316    4178 request.go:632] Waited for 195.46211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m02
	I0926 17:54:42.660458    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m02
	I0926 17:54:42.660474    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.660486    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.660494    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.665327    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:42.860122    4178 request.go:632] Waited for 194.468161ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:42.860201    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:42.860211    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:42.860222    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:42.860231    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:42.864049    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:42.864456    4178 pod_ready.go:93] pod "kube-controller-manager-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:42.864465    4178 pod_ready.go:82] duration metric: took 399.6655ms for pod "kube-controller-manager-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:42.864473    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:43.060814    4178 request.go:632] Waited for 196.258122ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m03
	I0926 17:54:43.060925    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-476000-m03
	I0926 17:54:43.060935    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.060947    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.060956    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.065088    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:43.261824    4178 request.go:632] Waited for 196.351744ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:43.261944    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:43.261957    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.261967    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.261984    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.266272    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:43.266738    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:43.266748    4178 pod_ready.go:82] duration metric: took 402.268136ms for pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:43.266762    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-controller-manager-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:43.266768    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5d8nb" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:43.460501    4178 request.go:632] Waited for 193.687301ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d8nb
	I0926 17:54:43.460615    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d8nb
	I0926 17:54:43.460627    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.460639    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.460647    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.463846    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:43.662152    4178 request.go:632] Waited for 197.799796ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m04
	I0926 17:54:43.662296    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m04
	I0926 17:54:43.662312    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.662324    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.662334    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.666430    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:43.666928    4178 pod_ready.go:98] node "ha-476000-m04" hosting pod "kube-proxy-5d8nb" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m04" has status "Ready":"Unknown"
	I0926 17:54:43.666940    4178 pod_ready.go:82] duration metric: took 400.16396ms for pod "kube-proxy-5d8nb" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:43.666946    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m04" hosting pod "kube-proxy-5d8nb" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m04" has status "Ready":"Unknown"
	I0926 17:54:43.666950    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bpsqv" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:43.860782    4178 request.go:632] Waited for 193.758415ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bpsqv
	I0926 17:54:43.860851    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bpsqv
	I0926 17:54:43.860893    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:43.860905    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:43.860912    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:43.865061    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:44.060850    4178 request.go:632] Waited for 195.218122ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:44.060902    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:44.060920    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.060968    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.060976    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.065008    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:44.065426    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-proxy-bpsqv" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:44.065437    4178 pod_ready.go:82] duration metric: took 398.480723ms for pod "kube-proxy-bpsqv" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:44.065443    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-proxy-bpsqv" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:44.065448    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ctdh4" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.260264    4178 request.go:632] Waited for 194.757329ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ctdh4
	I0926 17:54:44.260395    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ctdh4
	I0926 17:54:44.260404    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.260417    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.260424    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.264668    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:44.461295    4178 request.go:632] Waited for 196.119983ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:44.461373    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:44.461384    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.461399    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.461407    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.465035    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:44.465397    4178 pod_ready.go:93] pod "kube-proxy-ctdh4" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:44.465406    4178 pod_ready.go:82] duration metric: took 399.951689ms for pod "kube-proxy-ctdh4" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.465413    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrsx7" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.660616    4178 request.go:632] Waited for 195.1575ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrsx7
	I0926 17:54:44.660704    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrsx7
	I0926 17:54:44.660715    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.660726    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.660734    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.664476    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:44.860447    4178 request.go:632] Waited for 195.571151ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:44.860565    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:44.860578    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:44.860588    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:44.860596    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:44.864038    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:44.864554    4178 pod_ready.go:93] pod "kube-proxy-nrsx7" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:44.864566    4178 pod_ready.go:82] duration metric: took 399.145507ms for pod "kube-proxy-nrsx7" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:44.864575    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.060924    4178 request.go:632] Waited for 196.301993ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000
	I0926 17:54:45.061011    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000
	I0926 17:54:45.061022    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.061034    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.061042    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.065277    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:45.260098    4178 request.go:632] Waited for 194.412657ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:45.260187    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000
	I0926 17:54:45.260208    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.260220    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.260229    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.264296    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:45.264558    4178 pod_ready.go:93] pod "kube-scheduler-ha-476000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:45.264567    4178 pod_ready.go:82] duration metric: took 399.984402ms for pod "kube-scheduler-ha-476000" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.264574    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.460204    4178 request.go:632] Waited for 195.586272ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m02
	I0926 17:54:45.460285    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m02
	I0926 17:54:45.460296    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.460307    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.460315    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.463717    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:45.661528    4178 request.go:632] Waited for 197.284014ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:45.661624    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m02
	I0926 17:54:45.661634    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.661645    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.661653    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.666080    4178 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 17:54:45.666323    4178 pod_ready.go:93] pod "kube-scheduler-ha-476000-m02" in "kube-system" namespace has status "Ready":"True"
	I0926 17:54:45.666333    4178 pod_ready.go:82] duration metric: took 401.752851ms for pod "kube-scheduler-ha-476000-m02" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.666340    4178 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	I0926 17:54:45.860703    4178 request.go:632] Waited for 194.311899ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m03
	I0926 17:54:45.860736    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-476000-m03
	I0926 17:54:45.860740    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:45.860746    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:45.860750    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:45.863521    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:46.061792    4178 request.go:632] Waited for 197.829608ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:46.061901    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-476000-m03
	I0926 17:54:46.061915    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:46.061926    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:46.061934    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:46.065839    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:46.066244    4178 pod_ready.go:98] node "ha-476000-m03" hosting pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:46.066255    4178 pod_ready.go:82] duration metric: took 399.908641ms for pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace to be "Ready" ...
	E0926 17:54:46.066262    4178 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-476000-m03" hosting pod "kube-scheduler-ha-476000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-476000-m03" has status "Ready":"Unknown"
	I0926 17:54:46.066267    4178 pod_ready.go:39] duration metric: took 41.222971189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
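
Every wait in the block that just finished is the same pattern: fetch the pod, inspect its Ready condition, and (as the pod_ready.go:98 lines show) skip pods whose hosting node is itself not Ready. A minimal sketch of that per-pod readiness check with client-go; the kubeconfig path is a placeholder, and the real code additionally cross-checks the node object:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady fetches one pod and reports whether its Ready condition is True.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Placeholder path; the test run uses its own minikube kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := podReady(cs, "kube-system", "kube-apiserver-ha-476000")
	fmt.Println(ok, err)
}
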
	I0926 17:54:46.066282    4178 api_server.go:52] waiting for apiserver process to appear ...
	I0926 17:54:46.066375    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:54:46.079414    4178 logs.go:276] 2 containers: [7aabe646a9fe 3fae3e334c04]
	I0926 17:54:46.079513    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:54:46.092379    4178 logs.go:276] 2 containers: [f525de12dc97 bcf266ec7564]
	I0926 17:54:46.092476    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:54:46.105011    4178 logs.go:276] 0 containers: []
	W0926 17:54:46.105025    4178 logs.go:278] No container was found matching "coredns"
	I0926 17:54:46.105107    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:54:46.118452    4178 logs.go:276] 2 containers: [d9fc15fd82f1 d8ed18933791]
	I0926 17:54:46.118550    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:54:46.132316    4178 logs.go:276] 2 containers: [b52376c9cdcd a5a9a7e18064]
	I0926 17:54:46.132402    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:54:46.145649    4178 logs.go:276] 2 containers: [350a07550ad1 f82271bde6b0]
	I0926 17:54:46.145746    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:54:46.160399    4178 logs.go:276] 2 containers: [b73a2fda347d 981336ad66c6]
	I0926 17:54:46.160426    4178 logs.go:123] Gathering logs for kube-apiserver [7aabe646a9fe] ...
	I0926 17:54:46.160432    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aabe646a9fe"
	I0926 17:54:46.180676    4178 logs.go:123] Gathering logs for kube-apiserver [3fae3e334c04] ...
	I0926 17:54:46.180690    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fae3e334c04"
	I0926 17:54:46.213941    4178 logs.go:123] Gathering logs for kindnet [981336ad66c6] ...
	I0926 17:54:46.213956    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 981336ad66c6"
	I0926 17:54:46.229008    4178 logs.go:123] Gathering logs for container status ...
	I0926 17:54:46.229022    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:54:46.263727    4178 logs.go:123] Gathering logs for dmesg ...
	I0926 17:54:46.263743    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:54:46.275216    4178 logs.go:123] Gathering logs for etcd [f525de12dc97] ...
	I0926 17:54:46.275229    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f525de12dc97"
	I0926 17:54:46.340546    4178 logs.go:123] Gathering logs for etcd [bcf266ec7564] ...
	I0926 17:54:46.340563    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf266ec7564"
	I0926 17:54:46.368786    4178 logs.go:123] Gathering logs for kube-scheduler [d8ed18933791] ...
	I0926 17:54:46.368802    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ed18933791"
	I0926 17:54:46.392911    4178 logs.go:123] Gathering logs for kube-proxy [a5a9a7e18064] ...
	I0926 17:54:46.392926    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5a9a7e18064"
	I0926 17:54:46.411685    4178 logs.go:123] Gathering logs for kubelet ...
	I0926 17:54:46.411700    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:54:46.453572    4178 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:54:46.453588    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:54:46.819319    4178 logs.go:123] Gathering logs for kube-scheduler [d9fc15fd82f1] ...
	I0926 17:54:46.819338    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9fc15fd82f1"
	I0926 17:54:46.834299    4178 logs.go:123] Gathering logs for kube-proxy [b52376c9cdcd] ...
	I0926 17:54:46.834315    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b52376c9cdcd"
	I0926 17:54:46.850264    4178 logs.go:123] Gathering logs for Docker ...
	I0926 17:54:46.850278    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:54:46.881220    4178 logs.go:123] Gathering logs for kube-controller-manager [350a07550ad1] ...
	I0926 17:54:46.881233    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 350a07550ad1"
	I0926 17:54:46.915123    4178 logs.go:123] Gathering logs for kube-controller-manager [f82271bde6b0] ...
	I0926 17:54:46.915139    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82271bde6b0"
	I0926 17:54:46.943154    4178 logs.go:123] Gathering logs for kindnet [b73a2fda347d] ...
	I0926 17:54:46.943169    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a2fda347d"
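
Each "Gathering logs for ..." step above is one shell command executed on the guest over SSH: docker logs --tail 400 <container-id>. Run locally, the equivalent is a thin wrapper around os/exec; the container ID below is copied from the listing above and is only meaningful on that host:

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs mirrors the `docker logs --tail 400 <id>` calls in the
// log above, run locally rather than through ssh_runner.
func tailContainerLogs(id string, lines int) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(lines), id).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := tailContainerLogs("7aabe646a9fe", 400)
	if err != nil {
		fmt.Println("docker logs failed:", err)
		return
	}
	fmt.Print(logs)
}
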
	I0926 17:54:49.459929    4178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 17:54:49.472910    4178 api_server.go:72] duration metric: took 1m9.339247453s to wait for apiserver process to appear ...
	I0926 17:54:49.472923    4178 api_server.go:88] waiting for apiserver healthz status ...
	I0926 17:54:49.473016    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:54:49.489783    4178 logs.go:276] 2 containers: [7aabe646a9fe 3fae3e334c04]
	I0926 17:54:49.489876    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:54:49.503069    4178 logs.go:276] 2 containers: [f525de12dc97 bcf266ec7564]
	I0926 17:54:49.503157    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:54:49.514340    4178 logs.go:276] 0 containers: []
	W0926 17:54:49.514353    4178 logs.go:278] No container was found matching "coredns"
	I0926 17:54:49.514430    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:54:49.528690    4178 logs.go:276] 2 containers: [d9fc15fd82f1 d8ed18933791]
	I0926 17:54:49.528782    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:54:49.540774    4178 logs.go:276] 2 containers: [b52376c9cdcd a5a9a7e18064]
	I0926 17:54:49.540870    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:54:49.553605    4178 logs.go:276] 2 containers: [350a07550ad1 f82271bde6b0]
	I0926 17:54:49.553693    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:54:49.566939    4178 logs.go:276] 2 containers: [b73a2fda347d 981336ad66c6]
	I0926 17:54:49.566961    4178 logs.go:123] Gathering logs for kindnet [981336ad66c6] ...
	I0926 17:54:49.566967    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 981336ad66c6"
	I0926 17:54:49.584163    4178 logs.go:123] Gathering logs for etcd [bcf266ec7564] ...
	I0926 17:54:49.584179    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf266ec7564"
	I0926 17:54:49.608092    4178 logs.go:123] Gathering logs for kube-controller-manager [f82271bde6b0] ...
	I0926 17:54:49.608107    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82271bde6b0"
	I0926 17:54:49.640526    4178 logs.go:123] Gathering logs for etcd [f525de12dc97] ...
	I0926 17:54:49.640542    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f525de12dc97"
	I0926 17:54:49.707920    4178 logs.go:123] Gathering logs for kube-scheduler [d9fc15fd82f1] ...
	I0926 17:54:49.707937    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9fc15fd82f1"
	I0926 17:54:49.725537    4178 logs.go:123] Gathering logs for kube-proxy [b52376c9cdcd] ...
	I0926 17:54:49.725551    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b52376c9cdcd"
	I0926 17:54:49.747118    4178 logs.go:123] Gathering logs for kube-proxy [a5a9a7e18064] ...
	I0926 17:54:49.747134    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5a9a7e18064"
	I0926 17:54:49.763059    4178 logs.go:123] Gathering logs for kindnet [b73a2fda347d] ...
	I0926 17:54:49.763073    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a2fda347d"
	I0926 17:54:49.780606    4178 logs.go:123] Gathering logs for container status ...
	I0926 17:54:49.780619    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:54:49.815474    4178 logs.go:123] Gathering logs for kubelet ...
	I0926 17:54:49.815490    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:54:49.856341    4178 logs.go:123] Gathering logs for kube-apiserver [3fae3e334c04] ...
	I0926 17:54:49.856359    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fae3e334c04"
	I0926 17:54:49.895001    4178 logs.go:123] Gathering logs for kube-apiserver [7aabe646a9fe] ...
	I0926 17:54:49.895016    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aabe646a9fe"
	I0926 17:54:49.915291    4178 logs.go:123] Gathering logs for kube-scheduler [d8ed18933791] ...
	I0926 17:54:49.915307    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ed18933791"
	I0926 17:54:49.931682    4178 logs.go:123] Gathering logs for kube-controller-manager [350a07550ad1] ...
	I0926 17:54:49.931698    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 350a07550ad1"
	I0926 17:54:49.962905    4178 logs.go:123] Gathering logs for Docker ...
	I0926 17:54:49.962920    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:54:49.995739    4178 logs.go:123] Gathering logs for dmesg ...
	I0926 17:54:49.995756    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:54:50.006748    4178 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:54:50.006764    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:54:52.683223    4178 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0926 17:54:52.688111    4178 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0926 17:54:52.688148    4178 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0926 17:54:52.688152    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:52.688158    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:52.688162    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:52.688774    4178 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0926 17:54:52.688866    4178 api_server.go:141] control plane version: v1.31.1
	I0926 17:54:52.688877    4178 api_server.go:131] duration metric: took 3.215937625s to wait for apiserver health ...
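
The healthz wait that just completed (3.2s) is a retry loop against GET /healthz that succeeds once the response body is exactly "ok", as printed above. A standard-library sketch of such a loop; InsecureSkipVerify is tolerable only because this is a throwaway probe of a test VM, and a real client would present the cluster CA and a bearer token:

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls <apiServer>/healthz until it answers 200 "ok" or the
// context expires, roughly what the api_server.go wait above is doing.
func waitForHealthz(ctx context.Context, client *http.Client, apiServer string) error {
	for {
		resp, err := client.Get(apiServer + "/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // retry interval
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // test VM only
	}}
	if err := waitForHealthz(ctx, client, "https://192.169.0.5:8443"); err != nil {
		fmt.Println("apiserver never became healthy:", err)
		return
	}
	fmt.Println("apiserver healthz: ok")
}
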
	I0926 17:54:52.688882    4178 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 17:54:52.688964    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:54:52.702208    4178 logs.go:276] 2 containers: [7aabe646a9fe 3fae3e334c04]
	I0926 17:54:52.702296    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:54:52.716057    4178 logs.go:276] 2 containers: [f525de12dc97 bcf266ec7564]
	I0926 17:54:52.716146    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:54:52.730288    4178 logs.go:276] 0 containers: []
	W0926 17:54:52.730303    4178 logs.go:278] No container was found matching "coredns"
	I0926 17:54:52.730387    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:54:52.744133    4178 logs.go:276] 2 containers: [d9fc15fd82f1 d8ed18933791]
	I0926 17:54:52.744229    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:54:52.757357    4178 logs.go:276] 2 containers: [b52376c9cdcd a5a9a7e18064]
	I0926 17:54:52.757447    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:54:52.770397    4178 logs.go:276] 2 containers: [350a07550ad1 f82271bde6b0]
	I0926 17:54:52.770488    4178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:54:52.783588    4178 logs.go:276] 2 containers: [b73a2fda347d 981336ad66c6]
	I0926 17:54:52.783609    4178 logs.go:123] Gathering logs for dmesg ...
	I0926 17:54:52.783615    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:54:52.794149    4178 logs.go:123] Gathering logs for kube-scheduler [d9fc15fd82f1] ...
	I0926 17:54:52.794162    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9fc15fd82f1"
	I0926 17:54:52.810239    4178 logs.go:123] Gathering logs for kube-proxy [a5a9a7e18064] ...
	I0926 17:54:52.810253    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5a9a7e18064"
	I0926 17:54:52.828364    4178 logs.go:123] Gathering logs for kube-controller-manager [350a07550ad1] ...
	I0926 17:54:52.828379    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 350a07550ad1"
	I0926 17:54:52.859712    4178 logs.go:123] Gathering logs for kindnet [b73a2fda347d] ...
	I0926 17:54:52.859726    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a2fda347d"
	I0926 17:54:52.877881    4178 logs.go:123] Gathering logs for container status ...
	I0926 17:54:52.877898    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:54:52.920788    4178 logs.go:123] Gathering logs for kube-scheduler [d8ed18933791] ...
	I0926 17:54:52.920802    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ed18933791"
	I0926 17:54:52.937686    4178 logs.go:123] Gathering logs for Docker ...
	I0926 17:54:52.937700    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:54:52.970435    4178 logs.go:123] Gathering logs for kubelet ...
	I0926 17:54:52.970449    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:54:53.015652    4178 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:54:53.015669    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:54:53.184377    4178 logs.go:123] Gathering logs for etcd [f525de12dc97] ...
	I0926 17:54:53.184391    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f525de12dc97"
	I0926 17:54:53.249067    4178 logs.go:123] Gathering logs for etcd [bcf266ec7564] ...
	I0926 17:54:53.249083    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf266ec7564"
	I0926 17:54:53.274003    4178 logs.go:123] Gathering logs for kube-controller-manager [f82271bde6b0] ...
	I0926 17:54:53.274019    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82271bde6b0"
	I0926 17:54:53.300047    4178 logs.go:123] Gathering logs for kube-apiserver [7aabe646a9fe] ...
	I0926 17:54:53.300062    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aabe646a9fe"
	I0926 17:54:53.321481    4178 logs.go:123] Gathering logs for kube-apiserver [3fae3e334c04] ...
	I0926 17:54:53.321495    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fae3e334c04"
	I0926 17:54:53.356023    4178 logs.go:123] Gathering logs for kube-proxy [b52376c9cdcd] ...
	I0926 17:54:53.356038    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b52376c9cdcd"
	I0926 17:54:53.374219    4178 logs.go:123] Gathering logs for kindnet [981336ad66c6] ...
	I0926 17:54:53.374233    4178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 981336ad66c6"
	I0926 17:54:55.893460    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0926 17:54:55.893486    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.893529    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.893539    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.899854    4178 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0926 17:54:55.904904    4178 system_pods.go:59] 26 kube-system pods found
	I0926 17:54:55.904920    4178 system_pods.go:61] "coredns-7c65d6cfc9-44l9n" [8009053f-fc43-43ba-a44e-fd1d53e88617] Running
	I0926 17:54:55.904925    4178 system_pods.go:61] "coredns-7c65d6cfc9-7jwgv" [20fb38a0-b993-41b3-9c91-98802d75ef47] Running
	I0926 17:54:55.904928    4178 system_pods.go:61] "etcd-ha-476000" [469cfe32-fa29-4816-afd7-3f115a3a6c67] Running
	I0926 17:54:55.904930    4178 system_pods.go:61] "etcd-ha-476000-m02" [d4c3ec0c-a151-4047-9a30-93cae10d24a0] Running
	I0926 17:54:55.904933    4178 system_pods.go:61] "etcd-ha-476000-m03" [e9166a94-af57-49ec-91a3-dc0c36083ca4] Running
	I0926 17:54:55.904936    4178 system_pods.go:61] "kindnet-44vxl" [488a3806-d7c1-4397-bff8-00d9ea3cb984] Running
	I0926 17:54:55.904938    4178 system_pods.go:61] "kindnet-4pnxr" [23b10d5b-fd63-4284-9c44-413b8bf80354] Running
	I0926 17:54:55.904941    4178 system_pods.go:61] "kindnet-hhrtc" [7ac457e3-ae85-4314-99dc-da37f5875807] Running
	I0926 17:54:55.904943    4178 system_pods.go:61] "kindnet-lgj66" [63fc5d71-c403-40d7-85a7-6ff48e307a79] Running
	I0926 17:54:55.904946    4178 system_pods.go:61] "kube-apiserver-ha-476000" [5c5e9fb4-0cda-4892-bae5-e51e839d2573] Running
	I0926 17:54:55.904948    4178 system_pods.go:61] "kube-apiserver-ha-476000-m02" [2d6d0355-152b-4be4-b810-f1c80c9ef24f] Running
	I0926 17:54:55.904951    4178 system_pods.go:61] "kube-apiserver-ha-476000-m03" [3ce79eb3-635c-4632-82e0-7240a7d49c50] Running
	I0926 17:54:55.904954    4178 system_pods.go:61] "kube-controller-manager-ha-476000" [9edef370-886e-4260-a3d3-99be564d90c6] Running
	I0926 17:54:55.904957    4178 system_pods.go:61] "kube-controller-manager-ha-476000-m02" [5eeed951-eaba-436f-9f80-8f68cb50425d] Running
	I0926 17:54:55.904960    4178 system_pods.go:61] "kube-controller-manager-ha-476000-m03" [e60f68a5-a353-4501-81c1-ac7d76820373] Running
	I0926 17:54:55.904962    4178 system_pods.go:61] "kube-proxy-5d8nb" [1e9c0178-2d58-4ca1-9fbe-c8e54d91bf1a] Running
	I0926 17:54:55.904965    4178 system_pods.go:61] "kube-proxy-bpsqv" [c264b112-c037-4332-9c47-2561ecb928f1] Running
	I0926 17:54:55.904967    4178 system_pods.go:61] "kube-proxy-ctdh4" [9a21a19a-5cad-4eb9-97f3-4c654fbbe59b] Running
	I0926 17:54:55.904970    4178 system_pods.go:61] "kube-proxy-nrsx7" [14b031d0-b044-4500-8cc1-1397c83c1886] Running
	I0926 17:54:55.904973    4178 system_pods.go:61] "kube-scheduler-ha-476000" [ef0e308c-1b28-413c-846e-24935489434d] Running
	I0926 17:54:55.904976    4178 system_pods.go:61] "kube-scheduler-ha-476000-m02" [2b75a7bc-a19c-4aeb-8521-10bd963cacbb] Running
	I0926 17:54:55.904978    4178 system_pods.go:61] "kube-scheduler-ha-476000-m03" [206cbf3f-bff1-4164-a622-3be7593c08ac] Running
	I0926 17:54:55.904981    4178 system_pods.go:61] "kube-vip-ha-476000" [376a9128-fe3c-41aa-b26c-da921ae20e68] Running
	I0926 17:54:55.904997    4178 system_pods.go:61] "kube-vip-ha-476000-m02" [7e907e74-7f15-4c04-b463-682a14766e66] Running
	I0926 17:54:55.905002    4178 system_pods.go:61] "kube-vip-ha-476000-m03" [bf2e3b8c-cf42-45b0-a4d7-a32b820bab21] Running
	I0926 17:54:55.905005    4178 system_pods.go:61] "storage-provisioner" [e3e367a7-6cda-4177-a81d-7897333308d7] Running
	I0926 17:54:55.905009    4178 system_pods.go:74] duration metric: took 3.216111125s to wait for pod list to return data ...
	I0926 17:54:55.905015    4178 default_sa.go:34] waiting for default service account to be created ...
	I0926 17:54:55.905062    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0926 17:54:55.905068    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.905073    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.905077    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.907842    4178 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 17:54:55.908016    4178 default_sa.go:45] found service account: "default"
	I0926 17:54:55.908026    4178 default_sa.go:55] duration metric: took 3.006211ms for default service account to be created ...
	I0926 17:54:55.908031    4178 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 17:54:55.908061    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0926 17:54:55.908066    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.908071    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.908075    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.912026    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:55.917054    4178 system_pods.go:86] 26 kube-system pods found
	I0926 17:54:55.917066    4178 system_pods.go:89] "coredns-7c65d6cfc9-44l9n" [8009053f-fc43-43ba-a44e-fd1d53e88617] Running
	I0926 17:54:55.917070    4178 system_pods.go:89] "coredns-7c65d6cfc9-7jwgv" [20fb38a0-b993-41b3-9c91-98802d75ef47] Running
	I0926 17:54:55.917073    4178 system_pods.go:89] "etcd-ha-476000" [469cfe32-fa29-4816-afd7-3f115a3a6c67] Running
	I0926 17:54:55.917076    4178 system_pods.go:89] "etcd-ha-476000-m02" [d4c3ec0c-a151-4047-9a30-93cae10d24a0] Running
	I0926 17:54:55.917080    4178 system_pods.go:89] "etcd-ha-476000-m03" [e9166a94-af57-49ec-91a3-dc0c36083ca4] Running
	I0926 17:54:55.917083    4178 system_pods.go:89] "kindnet-44vxl" [488a3806-d7c1-4397-bff8-00d9ea3cb984] Running
	I0926 17:54:55.917085    4178 system_pods.go:89] "kindnet-4pnxr" [23b10d5b-fd63-4284-9c44-413b8bf80354] Running
	I0926 17:54:55.917088    4178 system_pods.go:89] "kindnet-hhrtc" [7ac457e3-ae85-4314-99dc-da37f5875807] Running
	I0926 17:54:55.917091    4178 system_pods.go:89] "kindnet-lgj66" [63fc5d71-c403-40d7-85a7-6ff48e307a79] Running
	I0926 17:54:55.917094    4178 system_pods.go:89] "kube-apiserver-ha-476000" [5c5e9fb4-0cda-4892-bae5-e51e839d2573] Running
	I0926 17:54:55.917097    4178 system_pods.go:89] "kube-apiserver-ha-476000-m02" [2d6d0355-152b-4be4-b810-f1c80c9ef24f] Running
	I0926 17:54:55.917100    4178 system_pods.go:89] "kube-apiserver-ha-476000-m03" [3ce79eb3-635c-4632-82e0-7240a7d49c50] Running
	I0926 17:54:55.917103    4178 system_pods.go:89] "kube-controller-manager-ha-476000" [9edef370-886e-4260-a3d3-99be564d90c6] Running
	I0926 17:54:55.917106    4178 system_pods.go:89] "kube-controller-manager-ha-476000-m02" [5eeed951-eaba-436f-9f80-8f68cb50425d] Running
	I0926 17:54:55.917110    4178 system_pods.go:89] "kube-controller-manager-ha-476000-m03" [e60f68a5-a353-4501-81c1-ac7d76820373] Running
	I0926 17:54:55.917113    4178 system_pods.go:89] "kube-proxy-5d8nb" [1e9c0178-2d58-4ca1-9fbe-c8e54d91bf1a] Running
	I0926 17:54:55.917116    4178 system_pods.go:89] "kube-proxy-bpsqv" [c264b112-c037-4332-9c47-2561ecb928f1] Running
	I0926 17:54:55.917123    4178 system_pods.go:89] "kube-proxy-ctdh4" [9a21a19a-5cad-4eb9-97f3-4c654fbbe59b] Running
	I0926 17:54:55.917126    4178 system_pods.go:89] "kube-proxy-nrsx7" [14b031d0-b044-4500-8cc1-1397c83c1886] Running
	I0926 17:54:55.917129    4178 system_pods.go:89] "kube-scheduler-ha-476000" [ef0e308c-1b28-413c-846e-24935489434d] Running
	I0926 17:54:55.917132    4178 system_pods.go:89] "kube-scheduler-ha-476000-m02" [2b75a7bc-a19c-4aeb-8521-10bd963cacbb] Running
	I0926 17:54:55.917135    4178 system_pods.go:89] "kube-scheduler-ha-476000-m03" [206cbf3f-bff1-4164-a622-3be7593c08ac] Running
	I0926 17:54:55.917138    4178 system_pods.go:89] "kube-vip-ha-476000" [376a9128-fe3c-41aa-b26c-da921ae20e68] Running
	I0926 17:54:55.917140    4178 system_pods.go:89] "kube-vip-ha-476000-m02" [7e907e74-7f15-4c04-b463-682a14766e66] Running
	I0926 17:54:55.917144    4178 system_pods.go:89] "kube-vip-ha-476000-m03" [bf2e3b8c-cf42-45b0-a4d7-a32b820bab21] Running
	I0926 17:54:55.917146    4178 system_pods.go:89] "storage-provisioner" [e3e367a7-6cda-4177-a81d-7897333308d7] Running
	I0926 17:54:55.917151    4178 system_pods.go:126] duration metric: took 9.116472ms to wait for k8s-apps to be running ...
	I0926 17:54:55.917160    4178 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 17:54:55.917225    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 17:54:55.928854    4178 system_svc.go:56] duration metric: took 11.69353ms WaitForService to wait for kubelet
	I0926 17:54:55.928867    4178 kubeadm.go:582] duration metric: took 1m15.795183486s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
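
The kubelet service check above produces no output because systemctl is-active --quiet reports entirely through its exit status. A minimal sketch of the same probe (it would run on the guest, not the macOS host):

package main

import (
	"fmt"
	"os/exec"
)

// serviceActive reports whether a systemd unit is active; with --quiet the
// command prints nothing and only the exit code carries the answer.
func serviceActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", serviceActive("kubelet"))
}
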
	I0926 17:54:55.928878    4178 node_conditions.go:102] verifying NodePressure condition ...
	I0926 17:54:55.928918    4178 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0926 17:54:55.928924    4178 round_trippers.go:469] Request Headers:
	I0926 17:54:55.928930    4178 round_trippers.go:473]     Accept: application/json, */*
	I0926 17:54:55.928933    4178 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 17:54:55.932146    4178 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 17:54:55.933143    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933159    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933173    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933176    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933181    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933183    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933186    4178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:54:55.933190    4178 node_conditions.go:123] node cpu capacity is 2
	I0926 17:54:55.933193    4178 node_conditions.go:105] duration metric: took 4.311525ms to run NodePressure ...
	I0926 17:54:55.933202    4178 start.go:241] waiting for startup goroutines ...
	I0926 17:54:55.933219    4178 start.go:255] writing updated cluster config ...
	I0926 17:54:55.954947    4178 out.go:201] 
	I0926 17:54:55.975717    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:54:55.975787    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:54:55.997338    4178 out.go:177] * Starting "ha-476000-m03" control-plane node in "ha-476000" cluster
	I0926 17:54:56.055744    4178 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:54:56.055778    4178 cache.go:56] Caching tarball of preloaded images
	I0926 17:54:56.056007    4178 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 17:54:56.056029    4178 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:54:56.056173    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:54:56.057121    4178 start.go:360] acquireMachinesLock for ha-476000-m03: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:54:56.057290    4178 start.go:364] duration metric: took 139.967µs to acquireMachinesLock for "ha-476000-m03"
	I0926 17:54:56.057321    4178 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:54:56.057331    4178 fix.go:54] fixHost starting: m03
	I0926 17:54:56.057738    4178 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:54:56.057766    4178 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:54:56.066973    4178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52106
	I0926 17:54:56.067348    4178 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:54:56.067691    4178 main.go:141] libmachine: Using API Version  1
	I0926 17:54:56.067705    4178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:54:56.067918    4178 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:54:56.068036    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:54:56.068122    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetState
	I0926 17:54:56.068201    4178 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:54:56.068289    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid from json: 3537
	I0926 17:54:56.069219    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid 3537 missing from process table
	I0926 17:54:56.069237    4178 fix.go:112] recreateIfNeeded on ha-476000-m03: state=Stopped err=<nil>
	I0926 17:54:56.069245    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	W0926 17:54:56.069331    4178 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:54:56.090482    4178 out.go:177] * Restarting existing hyperkit VM for "ha-476000-m03" ...
	I0926 17:54:56.132629    4178 main.go:141] libmachine: (ha-476000-m03) Calling .Start
	I0926 17:54:56.132887    4178 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:54:56.132957    4178 main.go:141] libmachine: (ha-476000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid
	I0926 17:54:56.134746    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid 3537 missing from process table
	I0926 17:54:56.134764    4178 main.go:141] libmachine: (ha-476000-m03) DBG | pid 3537 is in state "Stopped"
	I0926 17:54:56.134782    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid...
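
The stale-pid handling above is the classic pattern: read the pid file, probe the pid with signal 0, and delete the file if the process is gone (pid 3537 here is "missing from process table"). A sketch of that probe; the pid-file path is a placeholder:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// pidAlive reports whether a process with the given pid exists, using the
// conventional signal-0 probe (no signal is delivered, only existence checked).
func pidAlive(pid int) bool {
	proc, err := os.FindProcess(pid) // always succeeds on Unix
	if err != nil {
		return false
	}
	return proc.Signal(syscall.Signal(0)) == nil
}

func main() {
	const pidFile = "/path/to/hyperkit.pid" // placeholder path
	data, err := os.ReadFile(pidFile)
	if err != nil {
		fmt.Println("no pid file:", err)
		return
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		fmt.Println("bad pid file:", err)
		return
	}
	if !pidAlive(pid) {
		fmt.Println("stale pid file, removing")
		_ = os.Remove(pidFile)
	}
}
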
	I0926 17:54:56.135225    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Using UUID 91a51069-a363-4c64-acd8-a07fa14dbb0d
	I0926 17:54:56.162007    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Generated MAC 66:6f:5a:2d:e2:16
	I0926 17:54:56.162027    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000
	I0926 17:54:56.162143    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"91a51069-a363-4c64-acd8-a07fa14dbb0d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003accc0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:54:56.162181    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"91a51069-a363-4c64-acd8-a07fa14dbb0d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003accc0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 17:54:56.162253    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "91a51069-a363-4c64-acd8-a07fa14dbb0d", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/ha-476000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"}
	I0926 17:54:56.162300    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 91a51069-a363-4c64-acd8-a07fa14dbb0d -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/ha-476000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-476000"
	I0926 17:54:56.162312    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 17:54:56.163637    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 DEBUG: hyperkit: Pid is 4226
	I0926 17:54:56.164043    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Attempt 0
	I0926 17:54:56.164071    4178 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:54:56.164140    4178 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid from json: 4226
	I0926 17:54:56.166126    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Searching for 66:6f:5a:2d:e2:16 in /var/db/dhcpd_leases ...
	I0926 17:54:56.166206    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0926 17:54:56.166235    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9e:5:36:80:93:e3 ID:1,9e:5:36:80:93:e3 Lease:0x66f75389}
	I0926 17:54:56.166254    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:96:a2:4a:f3:be:4a ID:1,96:a2:4a:f3:be:4a Lease:0x66f75376}
	I0926 17:54:56.166288    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:d6:ac:84:6b:65:3b ID:1,d6:ac:84:6b:65:3b Lease:0x66f6009b}
	I0926 17:54:56.166308    4178 main.go:141] libmachine: (ha-476000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:66:6f:5a:2d:e2:16 ID:1,66:6f:5a:2d:e2:16 Lease:0x66f7515c}
	I0926 17:54:56.166318    4178 main.go:141] libmachine: (ha-476000-m03) DBG | Found match: 66:6f:5a:2d:e2:16
	I0926 17:54:56.166327    4178 main.go:141] libmachine: (ha-476000-m03) DBG | IP: 192.169.0.7
	I0926 17:54:56.166332    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetConfigRaw
	I0926 17:54:56.166976    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetIP
	I0926 17:54:56.167202    4178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/ha-476000/config.json ...
	I0926 17:54:56.167675    4178 machine.go:93] provisionDockerMachine start ...
	I0926 17:54:56.167686    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:54:56.167814    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:54:56.167961    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:54:56.168088    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:54:56.168207    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:54:56.168321    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:54:56.168450    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:54:56.168613    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:54:56.168622    4178 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 17:54:56.172038    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 17:54:56.180188    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 17:54:56.181229    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:54:56.181258    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:54:56.181274    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:54:56.181290    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:54:56.563523    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 17:54:56.563541    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 17:54:56.678338    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 17:54:56.678355    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 17:54:56.678363    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 17:54:56.678373    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 17:54:56.679203    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 17:54:56.679212    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:54:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 17:55:02.300815    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0926 17:55:02.300833    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0926 17:55:02.300855    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0926 17:55:02.325228    4178 main.go:141] libmachine: (ha-476000-m03) DBG | 2024/09/26 17:55:02 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0926 17:55:31.235618    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 17:55:31.235633    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetMachineName
	I0926 17:55:31.235773    4178 buildroot.go:166] provisioning hostname "ha-476000-m03"
	I0926 17:55:31.235783    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetMachineName
	I0926 17:55:31.235886    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.235992    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.236097    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.236189    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.236274    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.236414    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.236550    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.236559    4178 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-476000-m03 && echo "ha-476000-m03" | sudo tee /etc/hostname
	I0926 17:55:31.305642    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-476000-m03
	
	I0926 17:55:31.305657    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.305790    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.305908    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.306006    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.306089    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.306235    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.306383    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.306394    4178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-476000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-476000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-476000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:55:31.369873    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 17:55:31.369889    4178 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 17:55:31.369903    4178 buildroot.go:174] setting up certificates
	I0926 17:55:31.369909    4178 provision.go:84] configureAuth start
	I0926 17:55:31.369916    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetMachineName
	I0926 17:55:31.370048    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetIP
	I0926 17:55:31.370147    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.370234    4178 provision.go:143] copyHostCerts
	I0926 17:55:31.370268    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:55:31.370317    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 17:55:31.370322    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 17:55:31.370451    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 17:55:31.370647    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:55:31.370676    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 17:55:31.370680    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 17:55:31.370748    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 17:55:31.370903    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:55:31.370932    4178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 17:55:31.370937    4178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 17:55:31.371006    4178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 17:55:31.371150    4178 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.ha-476000-m03 san=[127.0.0.1 192.169.0.7 ha-476000-m03 localhost minikube]
	I0926 17:55:31.544988    4178 provision.go:177] copyRemoteCerts
	I0926 17:55:31.545045    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 17:55:31.545059    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.545196    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.545298    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.545402    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.545491    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:31.580851    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 17:55:31.580928    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 17:55:31.601357    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 17:55:31.601440    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 17:55:31.621840    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 17:55:31.621921    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 17:55:31.641722    4178 provision.go:87] duration metric: took 271.803372ms to configureAuth
	I0926 17:55:31.641736    4178 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:55:31.641909    4178 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:55:31.641923    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:31.642055    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.642148    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.642236    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.642329    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.642416    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.642531    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.642652    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.642659    4178 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:55:31.699187    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:55:31.699200    4178 buildroot.go:70] root file system type: tmpfs
	I0926 17:55:31.699283    4178 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:55:31.699296    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.699424    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.699525    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.699630    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.699725    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.699863    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.700007    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.700056    4178 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:55:31.769790    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 17:55:31.769808    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:31.769942    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:31.770041    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.770127    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:31.770216    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:31.770341    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:31.770484    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:31.770496    4178 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:55:33.400017    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 17:55:33.400032    4178 machine.go:96] duration metric: took 37.232210795s to provisionDockerMachine
	I0926 17:55:33.400040    4178 start.go:293] postStartSetup for "ha-476000-m03" (driver="hyperkit")
	I0926 17:55:33.400054    4178 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:55:33.400067    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.400257    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:55:33.400271    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.400365    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.400451    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.400540    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.400615    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:33.437533    4178 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:55:33.440663    4178 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 17:55:33.440673    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 17:55:33.440763    4178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 17:55:33.440901    4178 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 17:55:33.440910    4178 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 17:55:33.441066    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 17:55:33.449179    4178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 17:55:33.469328    4178 start.go:296] duration metric: took 69.278399ms for postStartSetup
	I0926 17:55:33.469350    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.469543    4178 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0926 17:55:33.469556    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.469645    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.469723    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.469812    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.469885    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:33.505216    4178 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0926 17:55:33.505294    4178 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0926 17:55:33.540120    4178 fix.go:56] duration metric: took 37.482649135s for fixHost
	I0926 17:55:33.540150    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.540287    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.540382    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.540461    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.540540    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.540677    4178 main.go:141] libmachine: Using SSH client type: native
	I0926 17:55:33.540816    4178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d83d00] 0x5d869e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0926 17:55:33.540823    4178 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:55:33.598810    4178 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398533.714160628
	
	I0926 17:55:33.598825    4178 fix.go:216] guest clock: 1727398533.714160628
	I0926 17:55:33.598831    4178 fix.go:229] Guest: 2024-09-26 17:55:33.714160628 -0700 PDT Remote: 2024-09-26 17:55:33.540136 -0700 PDT m=+153.107512249 (delta=174.024628ms)
	I0926 17:55:33.598841    4178 fix.go:200] guest clock delta is within tolerance: 174.024628ms
	I0926 17:55:33.598846    4178 start.go:83] releasing machines lock for "ha-476000-m03", held for 37.541403544s
	I0926 17:55:33.598861    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.598984    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetIP
	I0926 17:55:33.620720    4178 out.go:177] * Found network options:
	I0926 17:55:33.640782    4178 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0926 17:55:33.662722    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	W0926 17:55:33.662755    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:55:33.662789    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.663752    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.664030    4178 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:55:33.664220    4178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:55:33.664265    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	W0926 17:55:33.664303    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	W0926 17:55:33.664331    4178 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 17:55:33.664429    4178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 17:55:33.664449    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:55:33.664488    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.664703    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.664719    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:55:33.664903    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.664932    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:55:33.665066    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:55:33.665091    4178 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:55:33.665207    4178 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	W0926 17:55:33.697895    4178 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:55:33.697966    4178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 17:55:33.748934    4178 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 17:55:33.748959    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:55:33.749065    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:55:33.765581    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 17:55:33.775502    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:55:33.785025    4178 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:55:33.785083    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:55:33.794919    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:55:33.804605    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:55:33.814324    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:55:33.824237    4178 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:55:33.832956    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:55:33.841773    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:55:33.851179    4178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 17:55:33.860818    4178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:55:33.869929    4178 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 17:55:33.870002    4178 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 17:55:33.880612    4178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 17:55:33.888804    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:55:33.989453    4178 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 17:55:34.008589    4178 start.go:495] detecting cgroup driver to use...
	I0926 17:55:34.008666    4178 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:55:34.033408    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:55:34.045976    4178 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:55:34.061768    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:55:34.072236    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:55:34.082936    4178 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 17:55:34.101453    4178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:55:34.111855    4178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:55:34.126151    4178 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:55:34.129207    4178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:55:34.136448    4178 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 17:55:34.149966    4178 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:55:34.247760    4178 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:55:34.364359    4178 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:55:34.364382    4178 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 17:55:34.380269    4178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:55:34.475811    4178 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:56:35.519197    4178 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.04314195s)
	I0926 17:56:35.519276    4178 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0926 17:56:35.552893    4178 out.go:201] 
	W0926 17:56:35.574257    4178 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 27 00:55:31 ha-476000-m03 systemd[1]: Starting Docker Application Container Engine...
	Sep 27 00:55:31 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:31.500016553Z" level=info msg="Starting up"
	Sep 27 00:55:31 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:31.500635723Z" level=info msg="containerd not running, starting managed containerd"
	Sep 27 00:55:31 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:31.501585462Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=510
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.515859502Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530811327Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530896497Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530963742Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.530999016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531160593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531211393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531353040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531394128Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531431029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531461249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531611451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.531854923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533401951Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533446517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533570107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533614884Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533785548Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.533833312Z" level=info msg="metadata content store policy set" policy=shared
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537372044Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537425387Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537458961Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537519539Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537555242Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537622818Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537842730Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537922428Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537957588Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.537987448Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538017362Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538049217Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538078685Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538107984Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538137843Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538167077Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538198997Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538230397Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538266484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538296944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538326105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538358875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538390741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538420029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538458203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538495889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538528790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538561681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538590379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538618723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538647795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538678724Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538713636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538743343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538771404Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538879453Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538923135Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.538973990Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539015313Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539070453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539103724Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539133731Z" level=info msg="NRI interface is disabled by configuration."
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539314481Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539398768Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539457208Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 27 00:55:31 ha-476000-m03 dockerd[510]: time="2024-09-27T00:55:31.539540620Z" level=info msg="containerd successfully booted in 0.024310s"
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.523809928Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.557923590Z" level=info msg="Loading containers: start."
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.687864975Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 27 00:55:32 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:32.754261548Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.488464069Z" level=info msg="Loading containers: done."
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495297411Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495333206Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495348892Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.495450205Z" level=info msg="Daemon has completed initialization"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.514076327Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 27 00:55:33 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:33.514159018Z" level=info msg="API listen on [::]:2376"
	Sep 27 00:55:33 ha-476000-m03 systemd[1]: Started Docker Application Container Engine.
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.603579868Z" level=info msg="Processing signal 'terminated'"
	Sep 27 00:55:34 ha-476000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.604826953Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.605154827Z" level=info msg="Daemon shutdown complete"
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.605194895Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 27 00:55:34 ha-476000-m03 dockerd[503]: time="2024-09-27T00:55:34.605243671Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 27 00:55:35 ha-476000-m03 systemd[1]: docker.service: Deactivated successfully.
	Sep 27 00:55:35 ha-476000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Sep 27 00:55:35 ha-476000-m03 systemd[1]: Starting Docker Application Container Engine...
	Sep 27 00:55:35 ha-476000-m03 dockerd[1093]: time="2024-09-27T00:55:35.644572631Z" level=info msg="Starting up"
	Sep 27 00:56:35 ha-476000-m03 dockerd[1093]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 27 00:56:35 ha-476000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 27 00:56:35 ha-476000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 27 00:56:35 ha-476000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0926 17:56:35.574334    4178 out.go:270] * 
	W0926 17:56:35.575462    4178 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:56:35.658842    4178 out.go:201] 
	
	
	==> Docker <==
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.206048904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.206179384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 cri-dockerd[1422]: time="2024-09-27T00:54:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ded079a0572139d8da280864d2cf23e26a7a74761427fdb6aa8247bf1b618b63/resolv.conf as [nameserver 192.169.0.1]"
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.465946902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.465995187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.466006348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.466074171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 cri-dockerd[1422]: time="2024-09-27T00:54:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef132416f65d445e2be52f1f35d402e4103f11df5abe57373ffacf06538460a2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 27 00:54:01 ha-476000 cri-dockerd[1422]: time="2024-09-27T00:54:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82fb727d3b4ab9beb6771fe42b02b13cfa819ec6e94565fc85eb5e44849131dc/resolv.conf as [nameserver 192.169.0.1]"
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.953799067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.953836835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.953845219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.953903701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.967774874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.968202742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.968237276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:01 ha-476000 dockerd[1171]: time="2024-09-27T00:54:01.968864557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:32 ha-476000 dockerd[1165]: time="2024-09-27T00:54:32.331720830Z" level=info msg="ignoring event" container=182d3576c4be84b985fdcac8be52d4bc1daa03061cca1c9af1ea1719cc87ef93 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:54:32 ha-476000 dockerd[1171]: time="2024-09-27T00:54:32.332359122Z" level=info msg="shim disconnected" id=182d3576c4be84b985fdcac8be52d4bc1daa03061cca1c9af1ea1719cc87ef93 namespace=moby
	Sep 27 00:54:32 ha-476000 dockerd[1171]: time="2024-09-27T00:54:32.332548493Z" level=warning msg="cleaning up after shim disconnected" id=182d3576c4be84b985fdcac8be52d4bc1daa03061cca1c9af1ea1719cc87ef93 namespace=moby
	Sep 27 00:54:32 ha-476000 dockerd[1171]: time="2024-09-27T00:54:32.332589783Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:54:47 ha-476000 dockerd[1171]: time="2024-09-27T00:54:47.288497270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 00:54:47 ha-476000 dockerd[1171]: time="2024-09-27T00:54:47.289077983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:54:47 ha-476000 dockerd[1171]: time="2024-09-27T00:54:47.289196082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:54:47 ha-476000 dockerd[1171]: time="2024-09-27T00:54:47.289608100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b05b1fc6dccd2       6e38f40d628db                                                                                         7 minutes ago       Running             storage-provisioner       2                   82fb727d3b4ab       storage-provisioner
	182d3576c4be8       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       1                   82fb727d3b4ab       storage-provisioner
	1e068209398d4       8c811b4aec35f                                                                                         7 minutes ago       Running             busybox                   1                   ef132416f65d4       busybox-7dff88458-bvjrf
	3ab08f3aed771       60c005f310ff3                                                                                         7 minutes ago       Running             kube-proxy                1                   ded079a057213       kube-proxy-nrsx7
	13b4ae2edced3       12968670680f4                                                                                         7 minutes ago       Running             kindnet-cni               1                   aedbce80ab870       kindnet-lgj66
	bd209bf19cc97       c69fa2e9cbf5f                                                                                         7 minutes ago       Running             coredns                   1                   78def8c2a71e9       coredns-7c65d6cfc9-7jwgv
	fa6222acd1314       c69fa2e9cbf5f                                                                                         7 minutes ago       Running             coredns                   1                   c557d11d235a0       coredns-7c65d6cfc9-44l9n
	87e465b7b95f5       6bab7719df100                                                                                         7 minutes ago       Running             kube-apiserver            2                   84bf5bfc1db95       kube-apiserver-ha-476000
	01c5e9b4fab08       175ffd71cce3d                                                                                         7 minutes ago       Running             kube-controller-manager   2                   7a8e5df4a06d2       kube-controller-manager-ha-476000
	e50b7f6d45d34       38af8ddebf499                                                                                         8 minutes ago       Running             kube-vip                  0                   9ff0bf9fa82a1       kube-vip-ha-476000
	e923cc80604d7       9aa1fad941575                                                                                         8 minutes ago       Running             kube-scheduler            1                   14ddb9d9f440b       kube-scheduler-ha-476000
	89ad0e203b827       2e96e5913fc06                                                                                         8 minutes ago       Running             etcd                      1                   28300cd77661a       etcd-ha-476000
	d6683f4746762       6bab7719df100                                                                                         8 minutes ago       Exited              kube-apiserver            1                   84bf5bfc1db95       kube-apiserver-ha-476000
	06a5f950d0b27       175ffd71cce3d                                                                                         8 minutes ago       Exited              kube-controller-manager   1                   7a8e5df4a06d2       kube-controller-manager-ha-476000
	0fe8d9cd2d8d2       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   16 minutes ago      Exited              busybox                   0                   58dc7b4f775bb       busybox-7dff88458-bvjrf
	6e7030dd2319d       c69fa2e9cbf5f                                                                                         18 minutes ago      Exited              coredns                   0                   19d1dd5324d2b       coredns-7c65d6cfc9-7jwgv
	325909e950c7b       c69fa2e9cbf5f                                                                                         18 minutes ago      Exited              coredns                   0                   4de17e21e7a0f       coredns-7c65d6cfc9-44l9n
	730d4ab163e72       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              19 minutes ago      Exited              kindnet-cni               0                   30119aa4fc19b       kindnet-lgj66
	2d1ef1d1af27d       60c005f310ff3                                                                                         19 minutes ago      Exited              kube-proxy                0                   581372b45e58a       kube-proxy-nrsx7
	8b01a83a0b098       9aa1fad941575                                                                                         19 minutes ago      Exited              kube-scheduler            0                   c0232eed71fc3       kube-scheduler-ha-476000
	c08f45a78a8ec       2e96e5913fc06                                                                                         19 minutes ago      Exited              etcd                      0                   ff9ea0993276b       etcd-ha-476000
	
	
	==> coredns [325909e950c7] <==
	[INFO] 10.244.0.4:41413 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172004s
	[INFO] 10.244.0.4:39923 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145289s
	[INFO] 10.244.0.4:55894 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153357s
	[INFO] 10.244.0.4:52696 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059737s
	[INFO] 10.244.1.2:45922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008915s
	[INFO] 10.244.1.2:44828 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111301s
	[INFO] 10.244.1.2:53232 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116513s
	[INFO] 10.244.2.2:38669 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109219s
	[INFO] 10.244.2.2:51776 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069559s
	[INFO] 10.244.2.2:34317 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136009s
	[INFO] 10.244.2.2:35638 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001211s
	[INFO] 10.244.2.2:51345 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075754s
	[INFO] 10.244.0.4:53603 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110008s
	[INFO] 10.244.0.4:48703 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116941s
	[INFO] 10.244.1.2:60563 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101753s
	[INFO] 10.244.1.2:40746 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119902s
	[INFO] 10.244.2.2:38053 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094376s
	[INFO] 10.244.2.2:51713 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069296s
	[INFO] 10.244.0.4:32805 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008605s
	[INFO] 10.244.0.4:44664 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000292333s
	[INFO] 10.244.1.2:33360 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000078243s
	[INFO] 10.244.2.2:36409 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159318s
	[INFO] 10.244.2.2:36868 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094303s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6e7030dd2319] <==
	[INFO] 10.244.0.4:56870 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000085932s
	[INFO] 10.244.0.4:42671 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000180223s
	[INFO] 10.244.1.2:48098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102353s
	[INFO] 10.244.1.2:56626 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00009538s
	[INFO] 10.244.1.2:45195 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135305s
	[INFO] 10.244.1.2:57387 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000073744s
	[INFO] 10.244.1.2:56567 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000045328s
	[INFO] 10.244.2.2:40253 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000077683s
	[INFO] 10.244.2.2:49008 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110327s
	[INFO] 10.244.2.2:54182 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000061031s
	[INFO] 10.244.0.4:53519 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087904s
	[INFO] 10.244.0.4:37380 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132535s
	[INFO] 10.244.1.2:33397 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128623s
	[INFO] 10.244.1.2:35879 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014214s
	[INFO] 10.244.2.2:39230 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133513s
	[INFO] 10.244.2.2:47654 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054424s
	[INFO] 10.244.0.4:59796 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007443s
	[INFO] 10.244.0.4:49766 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000103812s
	[INFO] 10.244.1.2:36226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102458s
	[INFO] 10.244.1.2:35698 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010282s
	[INFO] 10.244.1.2:40757 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000066548s
	[INFO] 10.244.2.2:44488 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148719s
	[INFO] 10.244.2.2:40024 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000069743s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bd209bf19cc9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:43213 - 10525 "HINFO IN 4125844120146388069.4027558012888257277. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0104908s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1432599962]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.650) (total time: 30002ms):
	Trace[1432599962]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (00:54:31.653)
	Trace[1432599962]: [30.002427557s] [30.002427557s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[417897734]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.652) (total time: 30002ms):
	Trace[417897734]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (00:54:31.654)
	Trace[417897734]: [30.002368442s] [30.002368442s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1861937109]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.653) (total time: 30001ms):
	Trace[1861937109]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (00:54:31.654)
	Trace[1861937109]: [30.001494446s] [30.001494446s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [fa6222acd131] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35702 - 33029 "HINFO IN 8241224091513256990.6666502665085127686. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009680676s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1899858293]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.665) (total time: 30001ms):
	Trace[1899858293]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (00:54:31.666)
	Trace[1899858293]: [30.001480741s] [30.001480741s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1985679635]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.669) (total time: 30000ms):
	Trace[1985679635]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (00:54:31.669)
	Trace[1985679635]: [30.000934597s] [30.000934597s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[345146888]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:54:01.669) (total time: 30003ms):
	Trace[345146888]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (00:54:31.673)
	Trace[345146888]: [30.003771613s] [30.003771613s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-476000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-476000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-476000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_26T17_42_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:42:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-476000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 01:01:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:59:03 +0000   Fri, 27 Sep 2024 00:42:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:59:03 +0000   Fri, 27 Sep 2024 00:42:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:59:03 +0000   Fri, 27 Sep 2024 00:42:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:59:03 +0000   Fri, 27 Sep 2024 00:42:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-476000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 24c18e25f36040298bb96a7a31469c55
	  System UUID:                99cf4d4f-0000-0000-a72a-447af4e3b1db
	  Boot ID:                    8cf1f24c-8c01-4381-8f8f-6e75f77e6648
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bvjrf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-7c65d6cfc9-44l9n             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 coredns-7c65d6cfc9-7jwgv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 etcd-ha-476000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-lgj66                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-476000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-476000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-nrsx7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-476000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-476000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m46s                  kube-proxy       
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 19m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)      kubelet          Node ha-476000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)      kubelet          Node ha-476000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)      kubelet          Node ha-476000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     19m                    kubelet          Node ha-476000 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m                    kubelet          Node ha-476000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                    kubelet          Node ha-476000 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           19m                    node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  NodeReady                18m                    kubelet          Node ha-476000 status is now: NodeReady
	  Normal  RegisteredNode           18m                    node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  RegisteredNode           16m                    node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  Starting                 8m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m30s (x8 over 8m30s)  kubelet          Node ha-476000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m30s (x8 over 8m30s)  kubelet          Node ha-476000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m30s (x7 over 8m30s)  kubelet          Node ha-476000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m57s                  node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	  Normal  RegisteredNode           7m43s                  node-controller  Node ha-476000 event: Registered Node ha-476000 in Controller
	
	
	Name:               ha-476000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-476000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-476000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_26T17_43_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:43:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-476000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 01:01:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:59:06 +0000   Fri, 27 Sep 2024 00:43:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:59:06 +0000   Fri, 27 Sep 2024 00:43:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:59:06 +0000   Fri, 27 Sep 2024 00:43:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:59:06 +0000   Fri, 27 Sep 2024 00:54:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-476000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 35bc971223ac4e939cad535ac89bc725
	  System UUID:                58f4445b-0000-0000-bae0-ab27a7b8106e
	  Boot ID:                    7dcb1bbe-ca7a-45f1-9dd9-dc673285b7e4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gvp8q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-ha-476000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-hhrtc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-476000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-476000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-ctdh4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-476000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-476000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 18m                  kube-proxy       
	  Normal   Starting                 7m29s                kube-proxy       
	  Normal   Starting                 14m                  kube-proxy       
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)    kubelet          Node ha-476000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)    kubelet          Node ha-476000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)    kubelet          Node ha-476000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           18m                  node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   RegisteredNode           18m                  node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   RegisteredNode           16m                  node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   NodeAllocatableEnforced  14m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 14m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m                  kubelet          Node ha-476000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                  kubelet          Node ha-476000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                  kubelet          Node ha-476000-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 14m                  kubelet          Node ha-476000-m02 has been rebooted, boot id: 993826c6-3fde-4076-a7cb-33cc19f1b1bc
	  Normal   RegisteredNode           14m                  node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   NodeHasNoDiskPressure    8m9s (x8 over 8m9s)  kubelet          Node ha-476000-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 8m9s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  8m9s (x8 over 8m9s)  kubelet          Node ha-476000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     8m9s (x7 over 8m9s)  kubelet          Node ha-476000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  8m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           7m57s                node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	  Normal   RegisteredNode           7m43s                node-controller  Node ha-476000-m02 event: Registered Node ha-476000-m02 in Controller
	
	
	Name:               ha-476000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-476000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-476000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_26T17_44_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:44:51 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-476000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:47:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 27 Sep 2024 00:45:53 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 27 Sep 2024 00:45:53 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 27 Sep 2024 00:45:53 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 27 Sep 2024 00:45:53 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-476000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 365f6a31a3d140dba5c1be3b08da7ad2
	  System UUID:                91a54c64-0000-0000-acd8-a07fa14dbb0d
	  Boot ID:                    4ca02f6d-4375-4909-8877-3e005809b499
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jgndj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-ha-476000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-4pnxr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-476000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-476000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-bpsqv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-476000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-476000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node ha-476000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node ha-476000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node ha-476000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           16m                node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           16m                node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           7m57s              node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  RegisteredNode           7m43s              node-controller  Node ha-476000-m03 event: Registered Node ha-476000-m03 in Controller
	  Normal  NodeNotReady             7m17s              node-controller  Node ha-476000-m03 status is now: NodeNotReady
	
	
	Name:               ha-476000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-476000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-476000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_26T17_45_52_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:45:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-476000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:47:13 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 27 Sep 2024 00:46:22 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 27 Sep 2024 00:46:22 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 27 Sep 2024 00:46:22 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 27 Sep 2024 00:46:22 +0000   Fri, 27 Sep 2024 00:54:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-476000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 7bdc03e4e33a47a0a7d85ecb664669d4
	  System UUID:                dcce4501-0000-0000-a378-25a085ede049
	  Boot ID:                    b0d71ae5-8550-430a-94b7-e146e65fc279
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-44vxl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-proxy-5d8nb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientPID     15m (x2 over 15m)  kubelet          Node ha-476000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x2 over 15m)  kubelet          Node ha-476000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x2 over 15m)  kubelet          Node ha-476000-m04 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           15m                node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  NodeReady                15m                kubelet          Node ha-476000-m04 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  RegisteredNode           7m57s              node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  RegisteredNode           7m43s              node-controller  Node ha-476000-m04 event: Registered Node ha-476000-m04 in Controller
	  Normal  NodeNotReady             7m17s              node-controller  Node ha-476000-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.036532] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.006931] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.697129] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006778] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.775372] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.244387] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.695216] systemd-fstab-generator[461]: Ignoring "noauto" option for root device
	[  +0.101404] systemd-fstab-generator[473]: Ignoring "noauto" option for root device
	[  +1.958371] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	[  +0.251045] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +0.050021] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.047173] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +0.112931] systemd-fstab-generator[1157]: Ignoring "noauto" option for root device
	[  +2.468376] systemd-fstab-generator[1375]: Ignoring "noauto" option for root device
	[  +0.117710] systemd-fstab-generator[1387]: Ignoring "noauto" option for root device
	[  +0.113441] systemd-fstab-generator[1399]: Ignoring "noauto" option for root device
	[  +0.129593] systemd-fstab-generator[1414]: Ignoring "noauto" option for root device
	[  +0.427728] systemd-fstab-generator[1574]: Ignoring "noauto" option for root device
	[  +6.920294] kauditd_printk_skb: 212 callbacks suppressed
	[ +21.597968] kauditd_printk_skb: 40 callbacks suppressed
	[Sep27 00:54] kauditd_printk_skb: 94 callbacks suppressed
	
	
	==> etcd [89ad0e203b82] <==
	{"level":"warn","ts":"2024-09-27T01:00:46.604787Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:00:51.605567Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:00:51.605673Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:00:56.606564Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:00:56.606617Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:01.606696Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:01.606844Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:06.607662Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:06.607748Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:11.608840Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:11.608806Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:16.609867Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:16.610015Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:21.611095Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:21.611167Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:26.611861Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:26.611915Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:31.613088Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:31.613058Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:36.613353Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:36.613417Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:41.613941Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:41.613954Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:46.614609Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T01:01:46.614656Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a5e8f6083b0b81f2","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	
	
	==> etcd [c08f45a78a8e] <==
	{"level":"warn","ts":"2024-09-27T00:47:41.542035Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T00:47:33.744957Z","time spent":"7.797074842s","remote":"127.0.0.1:40790","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":0,"response size":0,"request content":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true "}
	2024/09/27 00:47:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-27T00:47:41.542079Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.225057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-27T00:47:41.542107Z","caller":"traceutil/trace.go:171","msg":"trace[2123825160] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"299.252922ms","start":"2024-09-27T00:47:41.242851Z","end":"2024-09-27T00:47:41.542104Z","steps":["trace[2123825160] 'agreement among raft nodes before linearized reading'  (duration: 299.224906ms)"],"step_count":1}
	2024/09/27 00:47:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-27T00:47:41.593990Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T00:47:41.594018Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-27T00:47:41.602616Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-27T00:47:41.604582Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604604Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604619Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604716Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604762Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604790Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604798Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c11bfa8a53277726"}
	{"level":"info","ts":"2024-09-27T00:47:41.604802Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.604809Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.604819Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.605484Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.605507Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.605546Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.605556Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a5e8f6083b0b81f2"}
	{"level":"info","ts":"2024-09-27T00:47:41.607550Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-27T00:47:41.607595Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-27T00:47:41.607615Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-476000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 01:01:49 up 8 min,  0 users,  load average: 0.28, 0.33, 0.20
	Linux ha-476000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [13b4ae2edced] <==
	I0927 01:01:12.491073       1 main.go:299] handling current node
	I0927 01:01:22.489963       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 01:01:22.490013       1 main.go:299] handling current node
	I0927 01:01:22.490027       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 01:01:22.490033       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 01:01:22.490309       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 01:01:22.490350       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 01:01:22.490406       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 01:01:22.490538       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 01:01:32.489725       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 01:01:32.489842       1 main.go:299] handling current node
	I0927 01:01:32.490043       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 01:01:32.490178       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 01:01:32.490485       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 01:01:32.490613       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 01:01:32.490780       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 01:01:32.490865       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 01:01:42.491359       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 01:01:42.491399       1 main.go:299] handling current node
	I0927 01:01:42.491410       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 01:01:42.491415       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 01:01:42.491596       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 01:01:42.491623       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 01:01:42.491779       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 01:01:42.491805       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [730d4ab163e7] <==
	I0927 00:47:03.705461       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:47:13.713791       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 00:47:13.713985       1 main.go:299] handling current node
	I0927 00:47:13.714102       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 00:47:13.714214       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 00:47:13.714414       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 00:47:13.714545       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 00:47:13.714946       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 00:47:13.715065       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:47:23.710748       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 00:47:23.710778       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 00:47:23.710966       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 00:47:23.711202       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 00:47:23.711295       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 00:47:23.711303       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:47:23.711508       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 00:47:23.711595       1 main.go:299] handling current node
	I0927 00:47:33.704824       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0927 00:47:33.704897       1 main.go:322] Node ha-476000-m02 has CIDR [10.244.1.0/24] 
	I0927 00:47:33.705242       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0927 00:47:33.705307       1 main.go:322] Node ha-476000-m03 has CIDR [10.244.2.0/24] 
	I0927 00:47:33.705486       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0927 00:47:33.705818       1 main.go:322] Node ha-476000-m04 has CIDR [10.244.3.0/24] 
	I0927 00:47:33.705995       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0927 00:47:33.706008       1 main.go:299] handling current node
	
	
	==> kube-apiserver [87e465b7b95f] <==
	I0927 00:54:02.884947       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0927 00:54:02.884955       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0927 00:54:02.943365       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 00:54:02.943570       1 policy_source.go:224] refreshing policies
	I0927 00:54:02.949648       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 00:54:02.975777       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0927 00:54:02.975897       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0927 00:54:02.975835       1 shared_informer.go:320] Caches are synced for configmaps
	I0927 00:54:02.976591       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0927 00:54:02.977323       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0927 00:54:02.977419       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0927 00:54:02.977565       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0927 00:54:02.982008       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0927 00:54:02.982182       1 shared_informer.go:320] Caches are synced for node_authorizer
	W0927 00:54:02.987432       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	I0927 00:54:02.987619       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0927 00:54:02.987707       1 aggregator.go:171] initial CRD sync complete...
	I0927 00:54:02.987750       1 autoregister_controller.go:144] Starting autoregister controller
	I0927 00:54:02.987857       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0927 00:54:02.987898       1 cache.go:39] Caches are synced for autoregister controller
	I0927 00:54:02.988709       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 00:54:02.993982       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0927 00:54:02.997126       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0927 00:54:03.884450       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0927 00:54:04.211694       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.6]
	
	
	==> kube-apiserver [d6683f474676] <==
	I0927 00:53:26.693239       1 options.go:228] external host was not specified, using 192.169.0.5
	I0927 00:53:26.695952       1 server.go:142] Version: v1.31.1
	I0927 00:53:26.696173       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:53:27.299904       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0927 00:53:27.320033       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 00:53:27.330041       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0927 00:53:27.330098       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0927 00:53:27.332141       1 instance.go:232] Using reconciler: lease
	W0927 00:53:47.293920       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0927 00:53:47.294149       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0927 00:53:47.333433       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [01c5e9b4fab0] <==
	I0927 00:54:07.185942       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.202µs"
	I0927 00:54:09.276645       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.828631ms"
	I0927 00:54:09.276726       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.067µs"
	I0927 00:54:32.998333       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m04"
	I0927 00:54:32.998470       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m03"
	I0927 00:54:33.020582       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m03"
	I0927 00:54:33.020882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m04"
	I0927 00:54:33.070337       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.029804ms"
	I0927 00:54:33.070565       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="96.493µs"
	I0927 00:54:36.474604       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m03"
	I0927 00:54:38.190557       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m03"
	I0927 00:54:40.584626       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-h7qwt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-h7qwt\": the object has been modified; please apply your changes to the latest version and try again"
	I0927 00:54:40.585022       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"3537638a-d8ae-4b35-b930-21aeb412efa9", APIVersion:"v1", ResourceVersion:"270", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-h7qwt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-h7qwt": the object has been modified; please apply your changes to the latest version and try again
	I0927 00:54:40.589666       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="48.410037ms"
	I0927 00:54:40.614904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="25.040724ms"
	I0927 00:54:40.615187       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="45.324µs"
	I0927 00:54:46.573579       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m04"
	I0927 00:54:48.277366       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m04"
	I0927 00:59:03.699041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000"
	I0927 00:59:06.173964       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-476000-m02"
	I0927 00:59:36.474889       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7dff88458-jgndj"
	I0927 00:59:36.494985       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="35.479µs"
	I0927 00:59:36.562600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.647863ms"
	I0927 00:59:36.603961       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.297548ms"
	I0927 00:59:36.604297       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="195.359µs"
	
	
	==> kube-controller-manager [06a5f950d0b2] <==
	I0927 00:53:27.325939       1 serving.go:386] Generated self-signed cert in-memory
	I0927 00:53:28.243164       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0927 00:53:28.243279       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:53:28.245422       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0927 00:53:28.245777       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0927 00:53:28.245999       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0927 00:53:28.246030       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0927 00:53:48.339070       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-proxy [2d1ef1d1af27] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:42:39.294950       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 00:42:39.305827       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0927 00:42:39.314387       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:42:39.360026       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:42:39.360068       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:42:39.360085       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:42:39.362140       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:42:39.362382       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:42:39.362411       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:42:39.365397       1 config.go:199] "Starting service config controller"
	I0927 00:42:39.365470       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:42:39.365636       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:42:39.365692       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:42:39.366725       1 config.go:328] "Starting node config controller"
	I0927 00:42:39.366799       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:42:39.466084       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:42:39.466107       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:42:39.468057       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [3ab08f3aed77] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:54:02.572463       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 00:54:02.595215       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0927 00:54:02.595477       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:54:02.710300       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:54:02.710322       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:54:02.710339       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:54:02.714167       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:54:02.715628       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:54:02.715707       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:54:02.718471       1 config.go:199] "Starting service config controller"
	I0927 00:54:02.719333       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:54:02.719741       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:54:02.719810       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:54:02.721272       1 config.go:328] "Starting node config controller"
	I0927 00:54:02.721390       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:54:02.820358       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:54:02.820547       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:54:02.824323       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8b01a83a0b09] <==
	E0927 00:45:52.380874       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mm66p\": pod kube-proxy-mm66p is already assigned to node \"ha-476000-m04\"" pod="kube-system/kube-proxy-mm66p"
	E0927 00:45:52.381463       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-44vxl\": pod kindnet-44vxl is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-44vxl" node="ha-476000-m04"
	E0927 00:45:52.381533       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 488a3806-d7c1-4397-bff8-00d9ea3cb984(kube-system/kindnet-44vxl) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-44vxl"
	E0927 00:45:52.381617       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-44vxl\": pod kindnet-44vxl is already assigned to node \"ha-476000-m04\"" pod="kube-system/kindnet-44vxl"
	I0927 00:45:52.381654       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-44vxl" node="ha-476000-m04"
	E0927 00:45:52.382881       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gtnxm\": pod kindnet-gtnxm is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-gtnxm" node="ha-476000-m04"
	E0927 00:45:52.383371       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c96b1801-d5cd-47bc-8555-43224fd5668c(kube-system/kindnet-gtnxm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-gtnxm"
	E0927 00:45:52.383419       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gtnxm\": pod kindnet-gtnxm is already assigned to node \"ha-476000-m04\"" pod="kube-system/kindnet-gtnxm"
	I0927 00:45:52.383438       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gtnxm" node="ha-476000-m04"
	E0927 00:45:52.385915       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5d8nb\": pod kube-proxy-5d8nb is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5d8nb" node="ha-476000-m04"
	E0927 00:45:52.386403       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1e9c0178-2d58-4ca1-9fbe-c8e54d91bf1a(kube-system/kube-proxy-5d8nb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5d8nb"
	E0927 00:45:52.388489       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5d8nb\": pod kube-proxy-5d8nb is already assigned to node \"ha-476000-m04\"" pod="kube-system/kube-proxy-5d8nb"
	I0927 00:45:52.388818       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5d8nb" node="ha-476000-m04"
	E0927 00:45:52.414440       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-p2r4t\": pod kindnet-p2r4t is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-p2r4t" node="ha-476000-m04"
	E0927 00:45:52.414491       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e7daae81-cf6d-498e-9458-8613a0c1f174(kube-system/kindnet-p2r4t) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-p2r4t"
	E0927 00:45:52.414504       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-p2r4t\": pod kindnet-p2r4t is already assigned to node \"ha-476000-m04\"" pod="kube-system/kindnet-p2r4t"
	I0927 00:45:52.414830       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-p2r4t" node="ha-476000-m04"
	E0927 00:45:52.434469       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-f2tbl\": pod kube-proxy-f2tbl is already assigned to node \"ha-476000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-f2tbl" node="ha-476000-m04"
	E0927 00:45:52.434547       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ce1fa3d7-adbb-4d4d-bd23-a1e60ee54d5b(kube-system/kube-proxy-f2tbl) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-f2tbl"
	E0927 00:45:52.434998       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-f2tbl\": pod kube-proxy-f2tbl is already assigned to node \"ha-476000-m04\"" pod="kube-system/kube-proxy-f2tbl"
	I0927 00:45:52.435043       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-f2tbl" node="ha-476000-m04"
	I0927 00:47:41.631073       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0927 00:47:41.633242       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0927 00:47:41.634639       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0927 00:47:41.635978       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e923cc80604d] <==
	W0927 00:53:55.890712       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:55.890825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:55.916618       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:55.916669       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:56.112443       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:56.112541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:56.325586       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:56.325680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:56.333523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:56.333592       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:57.242866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:57.243040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:57.398430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:57.398522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:57.562966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:57.563196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:58.300576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:58.300855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:53:58.356734       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0927 00:53:58.356802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:54:02.892809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 00:54:02.892856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:54:02.893077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:54:02.893208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0927 00:54:02.956308       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 00:57:19 ha-476000 kubelet[1581]: E0927 00:57:19.247466    1581 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 00:57:19 ha-476000 kubelet[1581]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:57:19 ha-476000 kubelet[1581]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:57:19 ha-476000 kubelet[1581]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:57:19 ha-476000 kubelet[1581]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 00:58:19 ha-476000 kubelet[1581]: E0927 00:58:19.248304    1581 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 00:58:19 ha-476000 kubelet[1581]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:58:19 ha-476000 kubelet[1581]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:58:19 ha-476000 kubelet[1581]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:58:19 ha-476000 kubelet[1581]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 00:59:19 ha-476000 kubelet[1581]: E0927 00:59:19.247941    1581 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 00:59:19 ha-476000 kubelet[1581]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:59:19 ha-476000 kubelet[1581]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:59:19 ha-476000 kubelet[1581]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:59:19 ha-476000 kubelet[1581]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 01:00:19 ha-476000 kubelet[1581]: E0927 01:00:19.248217    1581 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 01:00:19 ha-476000 kubelet[1581]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 01:00:19 ha-476000 kubelet[1581]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 01:00:19 ha-476000 kubelet[1581]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 01:00:19 ha-476000 kubelet[1581]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 01:01:19 ha-476000 kubelet[1581]: E0927 01:01:19.248364    1581 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 01:01:19 ha-476000 kubelet[1581]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 01:01:19 ha-476000 kubelet[1581]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 01:01:19 ha-476000 kubelet[1581]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 01:01:19 ha-476000 kubelet[1581]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
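The log dump above is consistent about the failure mode: etcd on ha-476000 spends the whole capture probing peer a5e8f6083b0b81f2 (192.169.0.7:2380) and getting connection refused, the kube-apiserver instance d6683f474676 dies waiting for etcd ("Error creating leases: error creating storage factory: context deadline exceeded"), and the previous kube-scheduler exits with "finished without leader elect". The recurring kubelet "iptables canary" messages only report that the guest kernel has no ip6tables nat table and look like unrelated noise. To check etcd member health by hand, something like the sketch below should work; it assumes minikube's usual kubeadm-style certificate paths under /var/lib/minikube/certs/etcd and an etcdctl binary on the node (otherwise run the same command inside the etcd container), neither of which is verified by this report:

	# on the macOS host: open a shell on the primary control-plane node
	minikube ssh -p ha-476000
	# inside the node: ask etcd for the health of every member it knows about
	sudo ETCDCTL_API=3 etcdctl \
	    --endpoints=https://192.169.0.5:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint health --cluster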
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-476000 -n ha-476000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-476000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-qwrlx
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-476000 describe pod busybox-7dff88458-qwrlx
helpers_test.go:282: (dbg) kubectl --context ha-476000 describe pod busybox-7dff88458-qwrlx:

-- stdout --
	Name:             busybox-7dff88458-qwrlx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lg2sq (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-lg2sq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age    From               Message
	  ----     ------            ----   ----               -------
	  Warning  FailedScheduling  2m15s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  2m15s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.77s)
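The describe output explains the Pending pod directly: of the four nodes, two fail the pod's anti-affinity rule (presumably because other busybox replicas already run there) and the other two carry the untolerated node.kubernetes.io/unreachable taint, so no node is schedulable and preemption cannot help. A quick cross-check of node readiness and replica placement, sketched with the same kubectl context the test uses and the app=busybox label shown in the pod description:

	# which of the four nodes are Ready, and which are unreachable?
	kubectl --context ha-476000 get nodes -o wide
	# where do the existing busybox replicas run?
	kubectl --context ha-476000 get pods -l app=busybox -o wide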

TestMountStart/serial/StartWithMountFirst (136.58s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-398000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E0926 18:06:17.545380    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-398000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 80 (2m16.498534845s)

-- stdout --
	* [mount-start-1-398000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-1-398000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "mount-start-1-398000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 16:e6:ae:b8:2b:c
	* Failed to start hyperkit VM. Running "minikube delete -p mount-start-1-398000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 76:9b:c5:85:ff:ca
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 76:9b:c5:85:ff:ca
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-398000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-398000 -n mount-start-1-398000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-398000 -n mount-start-1-398000: exit status 7 (79.100471ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0926 18:08:10.583300    5006 status.go:386] failed to get driver ip: getting IP: IP address is not set
	E0926 18:08:10.583320    5006 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-398000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/StartWithMountFirst (136.58s)
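Both provisioning attempts failed the same way: hyperkit booted the VM, but its MAC address never showed up in the host's DHCP leases, so minikube never learned a guest IP. On macOS the vmnet DHCP server writes its leases to /var/db/dhcpd_leases, which is the file the hyperkit driver polls, so a first manual check is whether either MAC from the stderr block ever appeared there (a diagnostic sketch to run on the host, not part of the test):

	# look for the two MACs minikube waited on; -B/-A show the surrounding
	# lease entry (name, ip_address) if a lease was ever granted
	grep -B 2 -A 2 '76:9b:c5:85:ff:ca' /var/db/dhcpd_leases
	grep -B 2 -A 2 '16:e6:ae:b8:2b:c' /var/db/dhcpd_leases

If neither MAC appears, the guest never completed DHCP (a stale or corrupt lease file is a common culprit), and the minikube delete -p mount-start-1-398000 suggested in the stderr output is the usual first step before retrying.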

TestMultiNode/serial/RestartMultiNode (141.16s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-108000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0926 18:15:36.892223    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-108000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : exit status 90 (2m17.609548527s)

-- stdout --
	* [multinode-108000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "multinode-108000" primary control-plane node in "multinode-108000" cluster
	* Restarting existing hyperkit VM for "multinode-108000" ...
	* Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-108000-m02" worker node in "multinode-108000" cluster
	* Restarting existing hyperkit VM for "multinode-108000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.14
	
	

-- /stdout --
** stderr ** 
	I0926 18:15:10.750251    5496 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:15:10.750510    5496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:15:10.750516    5496 out.go:358] Setting ErrFile to fd 2...
	I0926 18:15:10.750520    5496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:15:10.750705    5496 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 18:15:10.752073    5496 out.go:352] Setting JSON to false
	I0926 18:15:10.775187    5496 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4480,"bootTime":1727395230,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0926 18:15:10.775336    5496 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:15:10.796791    5496 out.go:177] * [multinode-108000] minikube v1.34.0 on Darwin 14.6.1
	I0926 18:15:10.839687    5496 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:15:10.839724    5496 notify.go:220] Checking for updates...
	I0926 18:15:10.882369    5496 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 18:15:10.903644    5496 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0926 18:15:10.924697    5496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:15:10.945384    5496 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 18:15:10.966653    5496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:15:10.988445    5496 config.go:182] Loaded profile config "multinode-108000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:15:10.989143    5496 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:15:10.989216    5496 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:15:10.998872    5496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53362
	I0926 18:15:10.999243    5496 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:15:10.999629    5496 main.go:141] libmachine: Using API Version  1
	I0926 18:15:10.999639    5496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:15:10.999883    5496 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:15:10.999986    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:11.000169    5496 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:15:11.000432    5496 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:15:11.000459    5496 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:15:11.008768    5496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53364
	I0926 18:15:11.009105    5496 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:15:11.009453    5496 main.go:141] libmachine: Using API Version  1
	I0926 18:15:11.009466    5496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:15:11.009674    5496 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:15:11.009812    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:11.038448    5496 out.go:177] * Using the hyperkit driver based on existing profile
	I0926 18:15:11.080596    5496 start.go:297] selected driver: hyperkit
	I0926 18:15:11.080625    5496 start.go:901] validating driver "hyperkit" against &{Name:multinode-108000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-108000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:15:11.080871    5496 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:15:11.081068    5496 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:15:11.081299    5496 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19711-1128/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0926 18:15:11.091103    5496 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0926 18:15:11.094863    5496 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:15:11.094881    5496 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0926 18:15:11.097842    5496 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:15:11.097884    5496 cni.go:84] Creating CNI manager for ""
	I0926 18:15:11.097930    5496 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0926 18:15:11.098006    5496 start.go:340] cluster config:
	{Name:multinode-108000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-108000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:15:11.098103    5496 iso.go:125] acquiring lock: {Name:mka8a9c5a237c1e4ae233281d2ff7965d13a843d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:15:11.140576    5496 out.go:177] * Starting "multinode-108000" primary control-plane node in "multinode-108000" cluster
	I0926 18:15:11.161702    5496 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:15:11.161786    5496 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0926 18:15:11.161813    5496 cache.go:56] Caching tarball of preloaded images
	I0926 18:15:11.162007    5496 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 18:15:11.162026    5496 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:15:11.162207    5496 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/config.json ...
	I0926 18:15:11.163098    5496 start.go:360] acquireMachinesLock for multinode-108000: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:15:11.163244    5496 start.go:364] duration metric: took 123.219µs to acquireMachinesLock for "multinode-108000"
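
The machines lock above is a named mutex acquired with a 500ms retry delay and a 13m timeout (per the lock spec logged just before it); here it was uncontended, so acquisition took only 123µs. A minimal Go sketch of that acquire-with-retry shape, using an O_EXCL lock file; the path and the file-based technique are illustrative, not minikube's actual mutex implementation:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquire takes an exclusive lock by creating a lock file with O_EXCL,
	// retrying every delay until timeout elapses. This mirrors the
	// {Delay:500ms Timeout:13m0s} shape in the log line above, not the
	// real named-mutex implementation.
	func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if !os.IsExist(err) {
				return nil, err
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out after %v acquiring %s", timeout, path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		// Hypothetical lock path, chosen for the sketch only.
		release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Println("lock held")
	}
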
	I0926 18:15:11.163281    5496 start.go:96] Skipping create...Using existing machine configuration
	I0926 18:15:11.163297    5496 fix.go:54] fixHost starting: 
	I0926 18:15:11.163724    5496 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:15:11.163750    5496 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:15:11.172811    5496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53366
	I0926 18:15:11.173221    5496 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:15:11.173642    5496 main.go:141] libmachine: Using API Version  1
	I0926 18:15:11.173653    5496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:15:11.174043    5496 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:15:11.174185    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:11.174331    5496 main.go:141] libmachine: (multinode-108000) Calling .GetState
	I0926 18:15:11.174419    5496 main.go:141] libmachine: (multinode-108000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:15:11.174522    5496 main.go:141] libmachine: (multinode-108000) DBG | hyperkit pid from json: 5408
	I0926 18:15:11.175429    5496 main.go:141] libmachine: (multinode-108000) DBG | hyperkit pid 5408 missing from process table
	I0926 18:15:11.175457    5496 fix.go:112] recreateIfNeeded on multinode-108000: state=Stopped err=<nil>
	I0926 18:15:11.175474    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	W0926 18:15:11.175568    5496 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 18:15:11.196340    5496 out.go:177] * Restarting existing hyperkit VM for "multinode-108000" ...
	I0926 18:15:11.238414    5496 main.go:141] libmachine: (multinode-108000) Calling .Start
	I0926 18:15:11.238592    5496 main.go:141] libmachine: (multinode-108000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:15:11.238630    5496 main.go:141] libmachine: (multinode-108000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/hyperkit.pid
	I0926 18:15:11.239933    5496 main.go:141] libmachine: (multinode-108000) DBG | hyperkit pid 5408 missing from process table
	I0926 18:15:11.239948    5496 main.go:141] libmachine: (multinode-108000) DBG | pid 5408 is in state "Stopped"
	I0926 18:15:11.239966    5496 main.go:141] libmachine: (multinode-108000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/hyperkit.pid...
	I0926 18:15:11.240316    5496 main.go:141] libmachine: (multinode-108000) DBG | Using UUID 1fff9e18-98b5-4af0-b682-f00d5d335588
	I0926 18:15:11.349220    5496 main.go:141] libmachine: (multinode-108000) DBG | Generated MAC 6e:13:d0:11:59:38
	I0926 18:15:11.349243    5496 main.go:141] libmachine: (multinode-108000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-108000
	I0926 18:15:11.349450    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1fff9e18-98b5-4af0-b682-f00d5d335588", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaba0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:15:11.349490    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1fff9e18-98b5-4af0-b682-f00d5d335588", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaba0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:15:11.349527    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1fff9e18-98b5-4af0-b682-f00d5d335588", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/multinode-108000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-108000"}
	I0926 18:15:11.349580    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1fff9e18-98b5-4af0-b682-f00d5d335588 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/multinode-108000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-108000"
	I0926 18:15:11.349593    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 18:15:11.351042    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 DEBUG: hyperkit: Pid is 5510
	I0926 18:15:11.351416    5496 main.go:141] libmachine: (multinode-108000) DBG | Attempt 0
	I0926 18:15:11.351429    5496 main.go:141] libmachine: (multinode-108000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:15:11.351501    5496 main.go:141] libmachine: (multinode-108000) DBG | hyperkit pid from json: 5510
	I0926 18:15:11.353311    5496 main.go:141] libmachine: (multinode-108000) DBG | Searching for 6e:13:d0:11:59:38 in /var/db/dhcpd_leases ...
	I0926 18:15:11.353378    5496 main.go:141] libmachine: (multinode-108000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0926 18:15:11.353405    5496 main.go:141] libmachine: (multinode-108000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:15:11.353419    5496 main.go:141] libmachine: (multinode-108000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f75815}
	I0926 18:15:11.353427    5496 main.go:141] libmachine: (multinode-108000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f757ea}
	I0926 18:15:11.353431    5496 main.go:141] libmachine: (multinode-108000) DBG | Found match: 6e:13:d0:11:59:38
	I0926 18:15:11.353456    5496 main.go:141] libmachine: (multinode-108000) DBG | IP: 192.169.0.14
	I0926 18:15:11.353470    5496 main.go:141] libmachine: (multinode-108000) Calling .GetConfigRaw
	I0926 18:15:11.354165    5496 main.go:141] libmachine: (multinode-108000) Calling .GetIP
	I0926 18:15:11.354362    5496 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/config.json ...
	I0926 18:15:11.354951    5496 machine.go:93] provisionDockerMachine start ...
	I0926 18:15:11.354961    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:11.355075    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:11.355184    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:11.355302    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:11.355440    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:11.355538    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:11.355681    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:15:11.355867    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0926 18:15:11.355875    5496 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 18:15:11.359076    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 18:15:11.410801    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 18:15:11.411497    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:15:11.411508    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:15:11.411517    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:15:11.411525    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:15:11.796734    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 18:15:11.796747    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 18:15:11.911687    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:15:11.911703    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:15:11.911711    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:15:11.911716    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:15:11.912526    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 18:15:11.912534    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 18:15:17.511540    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:17 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0926 18:15:17.511579    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:17 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0926 18:15:17.511589    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:17 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0926 18:15:17.536373    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:17 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0926 18:15:22.427366    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 18:15:22.427383    5496 main.go:141] libmachine: (multinode-108000) Calling .GetMachineName
	I0926 18:15:22.427531    5496 buildroot.go:166] provisioning hostname "multinode-108000"
	I0926 18:15:22.427543    5496 main.go:141] libmachine: (multinode-108000) Calling .GetMachineName
	I0926 18:15:22.427644    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:22.427741    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:22.427847    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.427947    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.428065    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:22.428207    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:15:22.428344    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0926 18:15:22.428351    5496 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-108000 && echo "multinode-108000" | sudo tee /etc/hostname
	I0926 18:15:22.502850    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-108000
	
	I0926 18:15:22.502870    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:22.503007    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:22.503129    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.503213    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.503295    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:22.503420    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:15:22.503564    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0926 18:15:22.503575    5496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-108000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-108000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-108000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 18:15:22.575924    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 18:15:22.575945    5496 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 18:15:22.575961    5496 buildroot.go:174] setting up certificates
	I0926 18:15:22.575967    5496 provision.go:84] configureAuth start
	I0926 18:15:22.575973    5496 main.go:141] libmachine: (multinode-108000) Calling .GetMachineName
	I0926 18:15:22.576112    5496 main.go:141] libmachine: (multinode-108000) Calling .GetIP
	I0926 18:15:22.576208    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:22.576306    5496 provision.go:143] copyHostCerts
	I0926 18:15:22.576335    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 18:15:22.576404    5496 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 18:15:22.576412    5496 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 18:15:22.576543    5496 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 18:15:22.576756    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 18:15:22.576795    5496 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 18:15:22.576800    5496 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 18:15:22.576876    5496 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 18:15:22.577008    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 18:15:22.577045    5496 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 18:15:22.577050    5496 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 18:15:22.577123    5496 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 18:15:22.577269    5496 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.multinode-108000 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-108000]
	I0926 18:15:22.652306    5496 provision.go:177] copyRemoteCerts
	I0926 18:15:22.652366    5496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 18:15:22.652379    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:22.652514    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:22.652639    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.652743    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:22.652838    5496 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/id_rsa Username:docker}
	I0926 18:15:22.692386    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 18:15:22.692453    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0926 18:15:22.712471    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 18:15:22.712531    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 18:15:22.732130    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 18:15:22.732186    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 18:15:22.752204    5496 provision.go:87] duration metric: took 176.224795ms to configureAuth
	I0926 18:15:22.752216    5496 buildroot.go:189] setting minikube options for container-runtime
	I0926 18:15:22.752378    5496 config.go:182] Loaded profile config "multinode-108000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:15:22.752391    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:22.752518    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:22.752598    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:22.752698    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.752797    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.752883    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:22.753007    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:15:22.753131    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0926 18:15:22.753138    5496 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 18:15:22.818711    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 18:15:22.818728    5496 buildroot.go:70] root file system type: tmpfs
	I0926 18:15:22.818806    5496 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 18:15:22.818821    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:22.818962    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:22.819053    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.819147    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.819233    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:22.819375    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:15:22.819517    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0926 18:15:22.819562    5496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 18:15:22.895883    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 18:15:22.895903    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:22.896045    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:22.896139    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.896225    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.896304    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:22.896467    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:15:22.896608    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0926 18:15:22.896620    5496 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 18:15:24.581382    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 18:15:24.581401    5496 machine.go:96] duration metric: took 13.226381415s to provisionDockerMachine
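
The docker.service unit is rendered on the host and shipped through "sudo tee"; note the "\$MAINPID" escape in the command above versus the literal "$MAINPID" in the echoed result, one round of remote-shell expansion. A minimal Go sketch of rendering such a unit with text/template; the abbreviated template body and field names are illustrative, not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// unitTmpl is a hypothetical, abbreviated stand-in for the unit above;
	// only the ExecStart-reset pattern and the TLS flags are shown. systemd
	// rejects multiple ExecStart= lines unless Type=oneshot, hence the blank
	// ExecStart= that clears any value inherited from a base unit.
	const unitTmpl = "[Service]\n" +
		"ExecStart=\n" +
		"ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock" +
		" --tlsverify --tlscacert {{.CACert}} --tlscert {{.ServerCert}}" +
		" --tlskey {{.ServerKey}} --label provider={{.Provider}}\n"

	type dockerOpts struct {
		CACert, ServerCert, ServerKey, Provider string
	}

	func main() {
		t := template.Must(template.New("unit").Parse(unitTmpl))
		// Cert paths match those copied into the guest earlier in this log.
		err := t.Execute(os.Stdout, dockerOpts{
			CACert:     "/etc/docker/ca.pem",
			ServerCert: "/etc/docker/server.pem",
			ServerKey:  "/etc/docker/server-key.pem",
			Provider:   "hyperkit",
		})
		if err != nil {
			panic(err)
		}
	}
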
	I0926 18:15:24.581411    5496 start.go:293] postStartSetup for "multinode-108000" (driver="hyperkit")
	I0926 18:15:24.581419    5496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 18:15:24.581432    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:24.581622    5496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 18:15:24.581635    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:24.581740    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:24.581842    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:24.581927    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:24.582073    5496 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/id_rsa Username:docker}
	I0926 18:15:24.625069    5496 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 18:15:24.628593    5496 command_runner.go:130] > NAME=Buildroot
	I0926 18:15:24.628608    5496 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0926 18:15:24.628612    5496 command_runner.go:130] > ID=buildroot
	I0926 18:15:24.628616    5496 command_runner.go:130] > VERSION_ID=2023.02.9
	I0926 18:15:24.628620    5496 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0926 18:15:24.628818    5496 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 18:15:24.628829    5496 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 18:15:24.628924    5496 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 18:15:24.629110    5496 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 18:15:24.629117    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 18:15:24.629337    5496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 18:15:24.639059    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 18:15:24.673630    5496 start.go:296] duration metric: took 92.20968ms for postStartSetup
	I0926 18:15:24.673656    5496 fix.go:56] duration metric: took 13.510304615s for fixHost
	I0926 18:15:24.673670    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:24.673801    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:24.673893    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:24.673989    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:24.674075    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:24.674222    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:15:24.674353    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0926 18:15:24.674360    5496 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 18:15:24.738613    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727399724.859428913
	
	I0926 18:15:24.738631    5496 fix.go:216] guest clock: 1727399724.859428913
	I0926 18:15:24.738636    5496 fix.go:229] Guest: 2024-09-26 18:15:24.859428913 -0700 PDT Remote: 2024-09-26 18:15:24.67366 -0700 PDT m=+13.959588443 (delta=185.768913ms)
	I0926 18:15:24.738657    5496 fix.go:200] guest clock delta is within tolerance: 185.768913ms
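
The guest clock is sampled with "date +%s.%N" over SSH and compared against the host clock; the 185.768913ms delta above is within tolerance, so no resync is needed. A minimal Go sketch of that comparison, parsing the seconds.nanoseconds output; the 2-second tolerance is an assumption, as the real threshold is not shown in this log:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseDateNS converts `date +%s.%N` output such as
	// "1727399724.859428913" into a time.Time. Assumes the full
	// zero-padded 9-digit %N nanosecond field, as in the sample above.
	func parseDateNS(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseDateNS("1727399724.859428913") // sampled over SSH above
		if err != nil {
			panic(err)
		}
		host := time.Unix(1727399724, 673660000) // host-side timestamp from the log
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold
		fmt.Printf("delta=%v within %v: %v\n", delta, tolerance, delta < tolerance)
	}
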
	I0926 18:15:24.738661    5496 start.go:83] releasing machines lock for "multinode-108000", held for 13.575343927s
	I0926 18:15:24.738678    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:24.738818    5496 main.go:141] libmachine: (multinode-108000) Calling .GetIP
	I0926 18:15:24.738930    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:24.739260    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:24.739368    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:24.739450    5496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 18:15:24.739483    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:24.739528    5496 ssh_runner.go:195] Run: cat /version.json
	I0926 18:15:24.739538    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:24.739590    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:24.739641    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:24.739666    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:24.739718    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:24.739750    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:24.739806    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:24.739829    5496 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/id_rsa Username:docker}
	I0926 18:15:24.739890    5496 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/id_rsa Username:docker}
	I0926 18:15:24.815971    5496 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0926 18:15:24.816911    5496 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I0926 18:15:24.817118    5496 ssh_runner.go:195] Run: systemctl --version
	I0926 18:15:24.822017    5496 command_runner.go:130] > systemd 252 (252)
	I0926 18:15:24.822046    5496 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0926 18:15:24.822149    5496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 18:15:24.826294    5496 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0926 18:15:24.826317    5496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 18:15:24.826360    5496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 18:15:24.838874    5496 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0926 18:15:24.839164    5496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 18:15:24.839174    5496 start.go:495] detecting cgroup driver to use...
	I0926 18:15:24.839283    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 18:15:24.854233    5496 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0926 18:15:24.854488    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 18:15:24.862837    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 18:15:24.871133    5496 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 18:15:24.871187    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 18:15:24.879395    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 18:15:24.887784    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 18:15:24.895839    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 18:15:24.904195    5496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 18:15:24.912648    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 18:15:24.920954    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 18:15:24.929204    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 18:15:24.937448    5496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 18:15:24.944910    5496 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 18:15:24.944934    5496 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 18:15:24.944973    5496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 18:15:24.953505    5496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 18:15:24.961687    5496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:15:25.073722    5496 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 18:15:25.092487    5496 start.go:495] detecting cgroup driver to use...
	I0926 18:15:25.092582    5496 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 18:15:25.107069    5496 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0926 18:15:25.107082    5496 command_runner.go:130] > [Unit]
	I0926 18:15:25.107087    5496 command_runner.go:130] > Description=Docker Application Container Engine
	I0926 18:15:25.107091    5496 command_runner.go:130] > Documentation=https://docs.docker.com
	I0926 18:15:25.107095    5496 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0926 18:15:25.107099    5496 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0926 18:15:25.107103    5496 command_runner.go:130] > StartLimitBurst=3
	I0926 18:15:25.107107    5496 command_runner.go:130] > StartLimitIntervalSec=60
	I0926 18:15:25.107110    5496 command_runner.go:130] > [Service]
	I0926 18:15:25.107114    5496 command_runner.go:130] > Type=notify
	I0926 18:15:25.107118    5496 command_runner.go:130] > Restart=on-failure
	I0926 18:15:25.107124    5496 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0926 18:15:25.107143    5496 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0926 18:15:25.107149    5496 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0926 18:15:25.107155    5496 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0926 18:15:25.107162    5496 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0926 18:15:25.107169    5496 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0926 18:15:25.107176    5496 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0926 18:15:25.107184    5496 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0926 18:15:25.107190    5496 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0926 18:15:25.107193    5496 command_runner.go:130] > ExecStart=
	I0926 18:15:25.107210    5496 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0926 18:15:25.107216    5496 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0926 18:15:25.107221    5496 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0926 18:15:25.107226    5496 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0926 18:15:25.107230    5496 command_runner.go:130] > LimitNOFILE=infinity
	I0926 18:15:25.107233    5496 command_runner.go:130] > LimitNPROC=infinity
	I0926 18:15:25.107237    5496 command_runner.go:130] > LimitCORE=infinity
	I0926 18:15:25.107241    5496 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0926 18:15:25.107246    5496 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0926 18:15:25.107249    5496 command_runner.go:130] > TasksMax=infinity
	I0926 18:15:25.107253    5496 command_runner.go:130] > TimeoutStartSec=0
	I0926 18:15:25.107259    5496 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0926 18:15:25.107263    5496 command_runner.go:130] > Delegate=yes
	I0926 18:15:25.107268    5496 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0926 18:15:25.107272    5496 command_runner.go:130] > KillMode=process
	I0926 18:15:25.107277    5496 command_runner.go:130] > [Install]
	I0926 18:15:25.107287    5496 command_runner.go:130] > WantedBy=multi-user.target
	I0926 18:15:25.107365    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 18:15:25.120217    5496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 18:15:25.133987    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 18:15:25.144928    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 18:15:25.155787    5496 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 18:15:25.176283    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 18:15:25.187021    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 18:15:25.201439    5496 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0926 18:15:25.201678    5496 ssh_runner.go:195] Run: which cri-dockerd
	I0926 18:15:25.204469    5496 command_runner.go:130] > /usr/bin/cri-dockerd
	I0926 18:15:25.204594    5496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 18:15:25.211668    5496 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 18:15:25.225438    5496 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 18:15:25.328498    5496 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 18:15:25.435484    5496 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 18:15:25.435549    5496 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
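
docker.go:574 switches Docker to the cgroupfs cgroup driver by writing a small /etc/docker/daemon.json (130 bytes in this run). The sketch below shows the shape of such a file when marshaled from Go; the exact keys minikube emits are an assumption here, but native.cgroupdriver is the one that matters and must agree with the kubelet's cgroupDriver:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// If this driver disagrees with the kubelet's cgroupDriver setting,
	// the kubelet refuses to start pods.
	cfg := map[string]any{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out)) // content destined for /etc/docker/daemon.json
}
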
	I0926 18:15:25.449569    5496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:15:25.550403    5496 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 18:15:27.893151    5496 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.342719676s)
	I0926 18:15:27.893221    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 18:15:27.905045    5496 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0926 18:15:27.918823    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 18:15:27.929932    5496 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 18:15:28.032246    5496 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 18:15:28.137978    5496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:15:28.251312    5496 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 18:15:28.264994    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 18:15:28.275886    5496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:15:28.366478    5496 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 18:15:28.423109    5496 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 18:15:28.423205    5496 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0926 18:15:28.427642    5496 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0926 18:15:28.427654    5496 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0926 18:15:28.427658    5496 command_runner.go:130] > Device: 0,22	Inode: 762         Links: 1
	I0926 18:15:28.427664    5496 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0926 18:15:28.427668    5496 command_runner.go:130] > Access: 2024-09-27 01:15:28.500999470 +0000
	I0926 18:15:28.427672    5496 command_runner.go:130] > Modify: 2024-09-27 01:15:28.500999470 +0000
	I0926 18:15:28.427677    5496 command_runner.go:130] > Change: 2024-09-27 01:15:28.502999351 +0000
	I0926 18:15:28.427680    5496 command_runner.go:130] >  Birth: -
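
start.go:542 stats /var/run/cri-dockerd.sock in a loop for up to 60s; here the socket exists on the first try, as the stat output shows. A minimal polling loop in the same spirit:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or timeout elapses, mirroring the
// "Will wait 60s for socket path" step in the log.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForPath("/var/run/cri-dockerd.sock", 60*time.Second))
}
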
	I0926 18:15:28.427896    5496 start.go:563] Will wait 60s for crictl version
	I0926 18:15:28.427954    5496 ssh_runner.go:195] Run: which crictl
	I0926 18:15:28.431063    5496 command_runner.go:130] > /usr/bin/crictl
	I0926 18:15:28.431194    5496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 18:15:28.458785    5496 command_runner.go:130] > Version:  0.1.0
	I0926 18:15:28.458798    5496 command_runner.go:130] > RuntimeName:  docker
	I0926 18:15:28.458802    5496 command_runner.go:130] > RuntimeVersion:  27.3.1
	I0926 18:15:28.458807    5496 command_runner.go:130] > RuntimeApiVersion:  v1
	I0926 18:15:28.459669    5496 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0926 18:15:28.459770    5496 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 18:15:28.475658    5496 command_runner.go:130] > 27.3.1
	I0926 18:15:28.475784    5496 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 18:15:28.492767    5496 command_runner.go:130] > 27.3.1
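
The --format argument is a Go text/template that Docker evaluates against its version structure, which is why the command prints the bare "27.3.1". The mechanism, reproduced locally with a stand-in struct (not Docker's real types):

package main

import (
	"os"
	"text/template"
)

type server struct{ Version string }
type versionInfo struct{ Server server }

func main() {
	// Same template string the log passes to docker: {{.Server.Version}}
	t := template.Must(template.New("v").Parse("{{.Server.Version}}\n"))
	_ = t.Execute(os.Stdout, versionInfo{Server: server{Version: "27.3.1"}})
}
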
	I0926 18:15:28.537040    5496 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0926 18:15:28.537086    5496 main.go:141] libmachine: (multinode-108000) Calling .GetIP
	I0926 18:15:28.537491    5496 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0926 18:15:28.541787    5496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
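
The bash one-liner above is an idempotent hosts-file edit: drop any existing line tagged host.minikube.internal, append the fresh mapping, and copy the result back over /etc/hosts. The same logic in Go, run against an in-memory sample so the sketch stays root-free:

package main

import (
	"fmt"
	"strings"
)

// upsertHost removes any line ending in "\t<name>" and appends "<ip>\t<name>",
// matching the grep -v / echo pipeline in the log.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	sample := "127.0.0.1\tlocalhost\n192.169.0.1\thost.minikube.internal\n"
	fmt.Print(upsertHost(sample, "192.169.0.1", "host.minikube.internal"))
}
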
	I0926 18:15:28.551380    5496 kubeadm.go:883] updating cluster {Name:multinode-108000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-108000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 18:15:28.551476    5496 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:15:28.551556    5496 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 18:15:28.563909    5496 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0926 18:15:28.563923    5496 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0926 18:15:28.563927    5496 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0926 18:15:28.563931    5496 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0926 18:15:28.563935    5496 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0926 18:15:28.563939    5496 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0926 18:15:28.563955    5496 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0926 18:15:28.563959    5496 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0926 18:15:28.563968    5496 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 18:15:28.563972    5496 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0926 18:15:28.564481    5496 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0926 18:15:28.564490    5496 docker.go:615] Images already preloaded, skipping extraction
	I0926 18:15:28.564573    5496 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 18:15:28.575924    5496 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0926 18:15:28.575939    5496 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0926 18:15:28.575943    5496 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0926 18:15:28.575947    5496 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0926 18:15:28.575951    5496 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0926 18:15:28.575954    5496 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0926 18:15:28.575960    5496 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0926 18:15:28.575965    5496 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0926 18:15:28.575969    5496 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 18:15:28.575973    5496 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0926 18:15:28.576659    5496 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0926 18:15:28.576678    5496 cache_images.go:84] Images are preloaded, skipping loading
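
cache_images.go:84 can skip loading because every image the cluster needs already appears in the docker images listing above. The decision is a simple containment check; sketched:

package main

import "fmt"

// preloaded reports whether every required image appears in the list printed
// by `docker images --format {{.Repository}}:{{.Tag}}`.
func preloaded(required, got []string) bool {
	have := make(map[string]bool, len(got))
	for _, img := range got {
		have[img] = true
	}
	for _, img := range required {
		if !have[img] {
			return false
		}
	}
	return true
}

func main() {
	got := []string{"registry.k8s.io/kube-apiserver:v1.31.1", "registry.k8s.io/pause:3.10"}
	fmt.Println(preloaded([]string{"registry.k8s.io/pause:3.10"}, got)) // true
}
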
	I0926 18:15:28.576688    5496 kubeadm.go:934] updating node { 192.169.0.14 8443 v1.31.1 docker true true} ...
	I0926 18:15:28.576773    5496 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-108000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-108000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 18:15:28.576856    5496 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0926 18:15:28.611238    5496 command_runner.go:130] > cgroupfs
	I0926 18:15:28.611772    5496 cni.go:84] Creating CNI manager for ""
	I0926 18:15:28.611782    5496 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0926 18:15:28.611793    5496 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 18:15:28.611808    5496 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.14 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-108000 NodeName:multinode-108000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 18:15:28.611887    5496 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-108000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 18:15:28.611968    5496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0926 18:15:28.619541    5496 command_runner.go:130] > kubeadm
	I0926 18:15:28.619547    5496 command_runner.go:130] > kubectl
	I0926 18:15:28.619550    5496 command_runner.go:130] > kubelet
	I0926 18:15:28.619657    5496 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 18:15:28.619706    5496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 18:15:28.626911    5496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0926 18:15:28.640341    5496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 18:15:28.653661    5496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0926 18:15:28.667133    5496 ssh_runner.go:195] Run: grep 192.169.0.14	control-plane.minikube.internal$ /etc/hosts
	I0926 18:15:28.670052    5496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 18:15:28.679370    5496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:15:28.776594    5496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 18:15:28.791198    5496 certs.go:68] Setting up /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000 for IP: 192.169.0.14
	I0926 18:15:28.791211    5496 certs.go:194] generating shared ca certs ...
	I0926 18:15:28.791222    5496 certs.go:226] acquiring lock for ca certs: {Name:mk3d4665cb5756ae49ea6f7cd2a58d273aecc507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:15:28.791411    5496 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key
	I0926 18:15:28.791491    5496 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key
	I0926 18:15:28.791502    5496 certs.go:256] generating profile certs ...
	I0926 18:15:28.791596    5496 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/client.key
	I0926 18:15:28.791675    5496 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/apiserver.key.1450c8f5
	I0926 18:15:28.791743    5496 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/proxy-client.key
	I0926 18:15:28.791750    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0926 18:15:28.791771    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0926 18:15:28.791788    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0926 18:15:28.791805    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0926 18:15:28.791824    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0926 18:15:28.791851    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0926 18:15:28.791887    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0926 18:15:28.791906    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0926 18:15:28.792003    5496 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem (1338 bytes)
	W0926 18:15:28.792051    5496 certs.go:480] ignoring /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679_empty.pem, impossibly tiny 0 bytes
	I0926 18:15:28.792065    5496 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 18:15:28.792095    5496 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem (1082 bytes)
	I0926 18:15:28.792128    5496 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem (1123 bytes)
	I0926 18:15:28.792160    5496 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem (1675 bytes)
	I0926 18:15:28.792231    5496 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem (1708 bytes)
	I0926 18:15:28.792268    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem -> /usr/share/ca-certificates/1679.pem
	I0926 18:15:28.792294    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0926 18:15:28.792313    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0926 18:15:28.792769    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 18:15:28.824491    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 18:15:28.858920    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 18:15:28.884029    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0926 18:15:28.907578    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0926 18:15:28.927328    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0926 18:15:28.947177    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 18:15:28.967268    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 18:15:28.987093    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem --> /usr/share/ca-certificates/1679.pem (1338 bytes)
	I0926 18:15:29.007110    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1708 bytes)
	I0926 18:15:29.026731    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 18:15:29.046322    5496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 18:15:29.059808    5496 ssh_runner.go:195] Run: openssl version
	I0926 18:15:29.063793    5496 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0926 18:15:29.063977    5496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 18:15:29.072344    5496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 18:15:29.075691    5496 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 18:15:29.075791    5496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 18:15:29.075833    5496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 18:15:29.079940    5496 command_runner.go:130] > b5213941
	I0926 18:15:29.080071    5496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 18:15:29.088331    5496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1679.pem && ln -fs /usr/share/ca-certificates/1679.pem /etc/ssl/certs/1679.pem"
	I0926 18:15:29.096643    5496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1679.pem
	I0926 18:15:29.099914    5496 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 27 00:36 /usr/share/ca-certificates/1679.pem
	I0926 18:15:29.100043    5496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:36 /usr/share/ca-certificates/1679.pem
	I0926 18:15:29.100083    5496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1679.pem
	I0926 18:15:29.104211    5496 command_runner.go:130] > 51391683
	I0926 18:15:29.104328    5496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1679.pem /etc/ssl/certs/51391683.0"
	I0926 18:15:29.112463    5496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0926 18:15:29.120800    5496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0926 18:15:29.124144    5496 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 27 00:36 /usr/share/ca-certificates/16792.pem
	I0926 18:15:29.124244    5496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:36 /usr/share/ca-certificates/16792.pem
	I0926 18:15:29.124298    5496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0926 18:15:29.128361    5496 command_runner.go:130] > 3ec20f2e
	I0926 18:15:29.128595    5496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/3ec20f2e.0"
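
Each CA is installed under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash plus a ".0" suffix (b5213941.0, 51391683.0, 3ec20f2e.0 above); OpenSSL uses those hash names to locate trust anchors at verification time. A sketch that, like the log, shells out to openssl for the hash:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash mirrors `openssl x509 -hash -noout -in <pem>` followed by
// `ln -fs <pem> /etc/ssl/certs/<hash>.0`.
func linkByHash(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // emulate ln -f: replace any existing link
	return os.Symlink(pem, link)
}

func main() {
	fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}
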
	I0926 18:15:29.136915    5496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 18:15:29.140253    5496 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 18:15:29.140267    5496 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0926 18:15:29.140272    5496 command_runner.go:130] > Device: 253,1	Inode: 529437      Links: 1
	I0926 18:15:29.140277    5496 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0926 18:15:29.140283    5496 command_runner.go:130] > Access: 2024-09-27 01:12:19.505817222 +0000
	I0926 18:15:29.140287    5496 command_runner.go:130] > Modify: 2024-09-27 01:08:44.822156699 +0000
	I0926 18:15:29.140295    5496 command_runner.go:130] > Change: 2024-09-27 01:08:44.822156699 +0000
	I0926 18:15:29.140301    5496 command_runner.go:130] >  Birth: 2024-09-27 01:08:44.822156699 +0000
	I0926 18:15:29.140414    5496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0926 18:15:29.144643    5496 command_runner.go:130] > Certificate will not expire
	I0926 18:15:29.144777    5496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0926 18:15:29.148962    5496 command_runner.go:130] > Certificate will not expire
	I0926 18:15:29.149056    5496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0926 18:15:29.153170    5496 command_runner.go:130] > Certificate will not expire
	I0926 18:15:29.153336    5496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0926 18:15:29.157522    5496 command_runner.go:130] > Certificate will not expire
	I0926 18:15:29.157678    5496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0926 18:15:29.161829    5496 command_runner.go:130] > Certificate will not expire
	I0926 18:15:29.161978    5496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0926 18:15:29.166239    5496 command_runner.go:130] > Certificate will not expire
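
Every `-checkend 86400` invocation asks OpenSSL whether the certificate expires within the next 24 hours; all six answer no, so nothing is regenerated. The same test can be done natively with crypto/x509; a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within d, the equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	fmt.Println(expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour))
}
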
	I0926 18:15:29.166368    5496 kubeadm.go:392] StartCluster: {Name:multinode-108000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-108000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:15:29.166498    5496 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 18:15:29.181724    5496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 18:15:29.189178    5496 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0926 18:15:29.189188    5496 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0926 18:15:29.189195    5496 command_runner.go:130] > /var/lib/minikube/etcd:
	I0926 18:15:29.189199    5496 command_runner.go:130] > member
	I0926 18:15:29.189263    5496 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0926 18:15:29.189272    5496 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0926 18:15:29.189316    5496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0926 18:15:29.196536    5496 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0926 18:15:29.196843    5496 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-108000" does not appear in /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 18:15:29.196935    5496 kubeconfig.go:62] /Users/jenkins/minikube-integration/19711-1128/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-108000" cluster setting kubeconfig missing "multinode-108000" context setting]
	I0926 18:15:29.197123    5496 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/kubeconfig: {Name:mkb9c069c578c490d8cb734b05a21821e9215482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:15:29.197689    5496 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 18:15:29.197890    5496 kapi.go:59] client config for multinode-108000: &rest.Config{Host:"https://192.169.0.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/client.key", CAFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xdc8df00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 18:15:29.198213    5496 cert_rotation.go:140] Starting client certificate rotation controller
	I0926 18:15:29.198396    5496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0926 18:15:29.205621    5496 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.14
	I0926 18:15:29.205636    5496 kubeadm.go:1160] stopping kube-system containers ...
	I0926 18:15:29.205706    5496 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 18:15:29.222337    5496 command_runner.go:130] > c5d1e02f3410
	I0926 18:15:29.222349    5496 command_runner.go:130] > 264e74b184f3
	I0926 18:15:29.222353    5496 command_runner.go:130] > ae6756186a89
	I0926 18:15:29.222357    5496 command_runner.go:130] > aa5128e84e3c
	I0926 18:15:29.222360    5496 command_runner.go:130] > 24f91ce476a0
	I0926 18:15:29.222364    5496 command_runner.go:130] > 67dac98df54b
	I0926 18:15:29.222376    5496 command_runner.go:130] > d28db07575ac
	I0926 18:15:29.222380    5496 command_runner.go:130] > 9aa764225ca3
	I0926 18:15:29.222384    5496 command_runner.go:130] > 6c14e4e50817
	I0926 18:15:29.222387    5496 command_runner.go:130] > 0b00cd940822
	I0926 18:15:29.222390    5496 command_runner.go:130] > 96b13fc13d92
	I0926 18:15:29.222394    5496 command_runner.go:130] > e8c9a9508a99
	I0926 18:15:29.222397    5496 command_runner.go:130] > e8ecb49c95ed
	I0926 18:15:29.222424    5496 command_runner.go:130] > 0e2ed0aa0566
	I0926 18:15:29.222431    5496 command_runner.go:130] > 0d2737b4b446
	I0926 18:15:29.222435    5496 command_runner.go:130] > e4d5b4323b94
	I0926 18:15:29.222438    5496 command_runner.go:130] > 700ba38f29cd
	I0926 18:15:29.222441    5496 command_runner.go:130] > 1f9a87a7d94b
	I0926 18:15:29.222446    5496 command_runner.go:130] > bd18faf8df7e
	I0926 18:15:29.222449    5496 command_runner.go:130] > 819f06ad9f8f
	I0926 18:15:29.222452    5496 command_runner.go:130] > 7e18c6962c7e
	I0926 18:15:29.222456    5496 command_runner.go:130] > 1405f38eef7c
	I0926 18:15:29.222459    5496 command_runner.go:130] > 0bab0a59e548
	I0926 18:15:29.222462    5496 command_runner.go:130] > 5fe6f666077c
	I0926 18:15:29.222465    5496 command_runner.go:130] > 51a6a22182a5
	I0926 18:15:29.222476    5496 command_runner.go:130] > 9b970bc21b00
	I0926 18:15:29.222480    5496 command_runner.go:130] > 73d594bc25b2
	I0926 18:15:29.222484    5496 command_runner.go:130] > dab704818c00
	I0926 18:15:29.222487    5496 command_runner.go:130] > 63266cd7525c
	I0926 18:15:29.222491    5496 command_runner.go:130] > a111425be00e
	I0926 18:15:29.222493    5496 command_runner.go:130] > 61ef59d75417
	I0926 18:15:29.222513    5496 docker.go:483] Stopping containers: [c5d1e02f3410 264e74b184f3 ae6756186a89 aa5128e84e3c 24f91ce476a0 67dac98df54b d28db07575ac 9aa764225ca3 6c14e4e50817 0b00cd940822 96b13fc13d92 e8c9a9508a99 e8ecb49c95ed 0e2ed0aa0566 0d2737b4b446 e4d5b4323b94 700ba38f29cd 1f9a87a7d94b bd18faf8df7e 819f06ad9f8f 7e18c6962c7e 1405f38eef7c 0bab0a59e548 5fe6f666077c 51a6a22182a5 9b970bc21b00 73d594bc25b2 dab704818c00 63266cd7525c a111425be00e 61ef59d75417]
	I0926 18:15:29.222596    5496 ssh_runner.go:195] Run: docker stop c5d1e02f3410 264e74b184f3 ae6756186a89 aa5128e84e3c 24f91ce476a0 67dac98df54b d28db07575ac 9aa764225ca3 6c14e4e50817 0b00cd940822 96b13fc13d92 e8c9a9508a99 e8ecb49c95ed 0e2ed0aa0566 0d2737b4b446 e4d5b4323b94 700ba38f29cd 1f9a87a7d94b bd18faf8df7e 819f06ad9f8f 7e18c6962c7e 1405f38eef7c 0bab0a59e548 5fe6f666077c 51a6a22182a5 9b970bc21b00 73d594bc25b2 dab704818c00 63266cd7525c a111425be00e 61ef59d75417
	I0926 18:15:29.238263    5496 command_runner.go:130] > c5d1e02f3410
	I0926 18:15:29.238275    5496 command_runner.go:130] > 264e74b184f3
	I0926 18:15:29.238279    5496 command_runner.go:130] > ae6756186a89
	I0926 18:15:29.238282    5496 command_runner.go:130] > aa5128e84e3c
	I0926 18:15:29.238286    5496 command_runner.go:130] > 24f91ce476a0
	I0926 18:15:29.238289    5496 command_runner.go:130] > 67dac98df54b
	I0926 18:15:29.238300    5496 command_runner.go:130] > d28db07575ac
	I0926 18:15:29.238313    5496 command_runner.go:130] > 9aa764225ca3
	I0926 18:15:29.238318    5496 command_runner.go:130] > 6c14e4e50817
	I0926 18:15:29.238323    5496 command_runner.go:130] > 0b00cd940822
	I0926 18:15:29.238326    5496 command_runner.go:130] > 96b13fc13d92
	I0926 18:15:29.238329    5496 command_runner.go:130] > e8c9a9508a99
	I0926 18:15:29.238332    5496 command_runner.go:130] > e8ecb49c95ed
	I0926 18:15:29.238336    5496 command_runner.go:130] > 0e2ed0aa0566
	I0926 18:15:29.238341    5496 command_runner.go:130] > 0d2737b4b446
	I0926 18:15:29.238346    5496 command_runner.go:130] > e4d5b4323b94
	I0926 18:15:29.238349    5496 command_runner.go:130] > 700ba38f29cd
	I0926 18:15:29.238353    5496 command_runner.go:130] > 1f9a87a7d94b
	I0926 18:15:29.238356    5496 command_runner.go:130] > bd18faf8df7e
	I0926 18:15:29.238359    5496 command_runner.go:130] > 819f06ad9f8f
	I0926 18:15:29.238362    5496 command_runner.go:130] > 7e18c6962c7e
	I0926 18:15:29.238367    5496 command_runner.go:130] > 1405f38eef7c
	I0926 18:15:29.238370    5496 command_runner.go:130] > 0bab0a59e548
	I0926 18:15:29.238373    5496 command_runner.go:130] > 5fe6f666077c
	I0926 18:15:29.238377    5496 command_runner.go:130] > 51a6a22182a5
	I0926 18:15:29.238380    5496 command_runner.go:130] > 9b970bc21b00
	I0926 18:15:29.238388    5496 command_runner.go:130] > 73d594bc25b2
	I0926 18:15:29.238392    5496 command_runner.go:130] > dab704818c00
	I0926 18:15:29.238395    5496 command_runner.go:130] > 63266cd7525c
	I0926 18:15:29.238398    5496 command_runner.go:130] > a111425be00e
	I0926 18:15:29.238403    5496 command_runner.go:130] > 61ef59d75417
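
docker.go:483 collects every container whose name matches the k8s_<container>_<pod>_<namespace>_ convention for kube-system, then stops the whole batch with a single docker stop. The same two steps via the Docker CLI, sketched:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same filter as the log: kubelet-created containers in kube-system.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return // nothing to stop
	}
	// One `docker stop` invocation carrying all IDs, as in the log.
	args := append([]string{"stop"}, ids...)
	fmt.Println(exec.Command("docker", args...).Run())
}
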
	I0926 18:15:29.238471    5496 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0926 18:15:29.250881    5496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 18:15:29.258275    5496 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0926 18:15:29.258286    5496 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0926 18:15:29.258293    5496 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0926 18:15:29.258310    5496 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 18:15:29.258381    5496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 18:15:29.258389    5496 kubeadm.go:157] found existing configuration files:
	
	I0926 18:15:29.258433    5496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 18:15:29.265480    5496 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 18:15:29.265501    5496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 18:15:29.265542    5496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 18:15:29.272854    5496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 18:15:29.279864    5496 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 18:15:29.279879    5496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 18:15:29.279921    5496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 18:15:29.287366    5496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 18:15:29.294234    5496 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 18:15:29.294250    5496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 18:15:29.294289    5496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 18:15:29.301488    5496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 18:15:29.308467    5496 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 18:15:29.308484    5496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 18:15:29.308528    5496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
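
kubeadm.go:163 applies the same rule to each of the four kubeconfigs: keep it only if it already references https://control-plane.minikube.internal:8443, otherwise remove it so kubeadm can regenerate it. Here all four files are missing, so each grep fails and the rm -f is a no-op. The per-file logic, sketched:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStale removes path unless it already references endpoint. Removing a
// missing file returns nil, matching the rm -f semantics in the log.
func cleanStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // config already targets the right endpoint; keep it
	}
	return os.RemoveAll(path)
}

func main() {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		fmt.Println(cleanStale("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:8443"))
	}
}
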
	I0926 18:15:29.315784    5496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 18:15:29.323262    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:15:29.387035    5496 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 18:15:29.387208    5496 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0926 18:15:29.387366    5496 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0926 18:15:29.387491    5496 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0926 18:15:29.387741    5496 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0926 18:15:29.387801    5496 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0926 18:15:29.388148    5496 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0926 18:15:29.388270    5496 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0926 18:15:29.388465    5496 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0926 18:15:29.388551    5496 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0926 18:15:29.388699    5496 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0926 18:15:29.388894    5496 command_runner.go:130] > [certs] Using the existing "sa" key
	I0926 18:15:29.389752    5496 command_runner.go:130] ! W0927 01:15:29.506652    1381 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:29.389777    5496 command_runner.go:130] ! W0927 01:15:29.507154    1381 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:29.389850    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:15:29.423060    5496 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 18:15:29.547278    5496 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 18:15:29.932742    5496 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 18:15:30.080632    5496 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 18:15:30.279123    5496 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 18:15:30.476307    5496 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 18:15:30.478496    5496 command_runner.go:130] ! W0927 01:15:29.545360    1386 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:30.478520    5496 command_runner.go:130] ! W0927 01:15:29.545862    1386 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:30.478535    5496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.088664848s)
	I0926 18:15:30.478548    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:15:30.523419    5496 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 18:15:30.528646    5496 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 18:15:30.528655    5496 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0926 18:15:30.631719    5496 command_runner.go:130] ! W0927 01:15:30.633706    1391 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:30.631742    5496 command_runner.go:130] ! W0927 01:15:30.634198    1391 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:30.631757    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:15:30.683673    5496 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 18:15:30.683688    5496 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 18:15:30.685442    5496 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 18:15:30.686081    5496 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 18:15:30.688156    5496 command_runner.go:130] ! W0927 01:15:30.800041    1419 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:30.688173    5496 command_runner.go:130] ! W0927 01:15:30.800677    1419 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:30.688339    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:15:30.744560    5496 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 18:15:30.750301    5496 command_runner.go:130] ! W0927 01:15:30.864962    1425 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:30.750323    5496 command_runner.go:130] ! W0927 01:15:30.865788    1425 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:30.750347    5496 api_server.go:52] waiting for apiserver process to appear ...
	I0926 18:15:30.750432    5496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:15:31.252589    5496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:15:31.752671    5496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:15:32.251547    5496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:15:32.752318    5496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:15:32.763497    5496 command_runner.go:130] > 1714
	I0926 18:15:32.763640    5496 api_server.go:72] duration metric: took 2.01328698s to wait for apiserver process to appear ...
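
api_server.go:52 waits for the apiserver process by re-running pgrep about every 500ms; here it appears after roughly two seconds as PID 1714. A minimal version of that wait:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForProcess polls `pgrep -xnf <pattern>` until it reports a PID or the
// deadline passes.
func waitForProcess(pattern string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if out, err := exec.Command("pgrep", "-xnf", pattern).Output(); err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("no process matched %q within %s", pattern, timeout)
}

func main() {
	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", time.Minute))
}
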
	I0926 18:15:32.763651    5496 api_server.go:88] waiting for apiserver healthz status ...
	I0926 18:15:32.763667    5496 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0926 18:15:35.166029    5496 api_server.go:279] https://192.169.0.14:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0926 18:15:35.166046    5496 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0926 18:15:35.166056    5496 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0926 18:15:35.182853    5496 api_server.go:279] https://192.169.0.14:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0926 18:15:35.182869    5496 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0926 18:15:35.264983    5496 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0926 18:15:35.270010    5496 api_server.go:279] https://192.169.0.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0926 18:15:35.270024    5496 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0926 18:15:35.764546    5496 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0926 18:15:35.769975    5496 api_server.go:279] https://192.169.0.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0926 18:15:35.769990    5496 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0926 18:15:36.264338    5496 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0926 18:15:36.269447    5496 api_server.go:279] https://192.169.0.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0926 18:15:36.269463    5496 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0926 18:15:36.764566    5496 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0926 18:15:36.768587    5496 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
	I0926 18:15:36.768646    5496 round_trippers.go:463] GET https://192.169.0.14:8443/version
	I0926 18:15:36.768652    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:36.768660    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:36.768664    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:36.776855    5496 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0926 18:15:36.776867    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:36.776872    5496 round_trippers.go:580]     Content-Length: 263
	I0926 18:15:36.776875    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:36 GMT
	I0926 18:15:36.776878    5496 round_trippers.go:580]     Audit-Id: 97e34db1-7a8c-4e7f-a5b0-6b08911b79fa
	I0926 18:15:36.776886    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:36.776889    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:36.776892    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:36.776894    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:36.776914    5496 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0926 18:15:36.776968    5496 api_server.go:141] control plane version: v1.31.1
	I0926 18:15:36.776978    5496 api_server.go:131] duration metric: took 4.013304806s to wait for apiserver health ...
	I0926 18:15:36.776984    5496 cni.go:84] Creating CNI manager for ""
	I0926 18:15:36.776988    5496 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0926 18:15:36.800999    5496 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0926 18:15:36.821520    5496 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0926 18:15:36.825303    5496 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0926 18:15:36.825319    5496 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0926 18:15:36.825328    5496 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0926 18:15:36.825337    5496 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0926 18:15:36.825344    5496 command_runner.go:130] > Access: 2024-09-27 01:15:21.669218995 +0000
	I0926 18:15:36.825351    5496 command_runner.go:130] > Modify: 2024-09-23 21:47:52.000000000 +0000
	I0926 18:15:36.825359    5496 command_runner.go:130] > Change: 2024-09-27 01:15:19.118121505 +0000
	I0926 18:15:36.825366    5496 command_runner.go:130] >  Birth: -
	I0926 18:15:36.825580    5496 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0926 18:15:36.825588    5496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0926 18:15:36.846746    5496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0926 18:15:37.350952    5496 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0926 18:15:37.350967    5496 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0926 18:15:37.350971    5496 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0926 18:15:37.350975    5496 command_runner.go:130] > daemonset.apps/kindnet configured
	I0926 18:15:37.351043    5496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 18:15:37.351083    5496 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0926 18:15:37.351093    5496 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0926 18:15:37.351136    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0926 18:15:37.351141    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.351147    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.351151    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.354427    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:15:37.354438    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.354444    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.354447    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.354450    5496 round_trippers.go:580]     Audit-Id: 4fcb048c-626e-4471-a508-621f2f1c02c6
	I0926 18:15:37.354452    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.354454    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.354457    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.355283    5496 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1221"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89950 chars]
	I0926 18:15:37.359708    5496 system_pods.go:59] 12 kube-system pods found
	I0926 18:15:37.359733    5496 system_pods.go:61] "coredns-7c65d6cfc9-hxdhm" [ff9bbfa0-9278-44d7-abc5-7a38ed77ce23] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 18:15:37.359739    5496 system_pods.go:61] "etcd-multinode-108000" [2a5e99f4-416d-4d75-acd2-33231f5f780d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 18:15:37.359744    5496 system_pods.go:61] "kindnet-ktwmw" [5065643a-e9ee-44a6-a05d-b9154074dd84] Running
	I0926 18:15:37.359747    5496 system_pods.go:61] "kindnet-qlv2x" [08c7f9d2-c689-40b5-95fc-a48157150778] Running
	I0926 18:15:37.359750    5496 system_pods.go:61] "kindnet-wbk29" [a9ff7c3f-b5e1-40e5-ab9d-a38e2696988f] Running
	I0926 18:15:37.359754    5496 system_pods.go:61] "kube-apiserver-multinode-108000" [b8011715-128c-4dfc-94b7-cc9c04907c8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 18:15:37.359759    5496 system_pods.go:61] "kube-controller-manager-multinode-108000" [42fac17d-5eda-41e8-8747-902b605e747f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 18:15:37.359763    5496 system_pods.go:61] "kube-proxy-9kjdl" [979606a2-6bc4-46c0-8333-000bc25722f3] Running
	I0926 18:15:37.359765    5496 system_pods.go:61] "kube-proxy-ngs2x" [f95c0316-b4a8-4f0c-a90b-a88af50fbc68] Running
	I0926 18:15:37.359768    5496 system_pods.go:61] "kube-proxy-pwrqj" [dfc98f0e-705d-41fd-a871-9d4f8455b11d] Running
	I0926 18:15:37.359771    5496 system_pods.go:61] "kube-scheduler-multinode-108000" [e5b482e0-154d-4620-8f24-1ebf181b9c1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 18:15:37.359775    5496 system_pods.go:61] "storage-provisioner" [e67377e5-f7c5-4625-9739-3703de1f4739] Running
	I0926 18:15:37.359779    5496 system_pods.go:74] duration metric: took 8.729378ms to wait for pod list to return data ...
	I0926 18:15:37.359786    5496 node_conditions.go:102] verifying NodePressure condition ...
	I0926 18:15:37.359823    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes
	I0926 18:15:37.359828    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.359833    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.359837    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.361829    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.361838    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.361846    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.361852    5496 round_trippers.go:580]     Audit-Id: f5aa31fb-2428-47cc-8347-f9410728b8bd
	I0926 18:15:37.361859    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.361864    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.361868    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.361874    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.362111    5496 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1221"},"items":[{"metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1203","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10158 chars]
	I0926 18:15:37.362544    5496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 18:15:37.362558    5496 node_conditions.go:123] node cpu capacity is 2
	I0926 18:15:37.362568    5496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 18:15:37.362578    5496 node_conditions.go:123] node cpu capacity is 2
	I0926 18:15:37.362582    5496 node_conditions.go:105] duration metric: took 2.793131ms to run NodePressure ...
	I0926 18:15:37.362592    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:15:37.465696    5496 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0926 18:15:37.619472    5496 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0926 18:15:37.620493    5496 command_runner.go:130] ! W0927 01:15:37.537959    2228 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:37.620510    5496 command_runner.go:130] ! W0927 01:15:37.538519    2228 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:37.620527    5496 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0926 18:15:37.620589    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0926 18:15:37.620595    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.620601    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.620605    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.622467    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.622496    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.622502    5496 round_trippers.go:580]     Audit-Id: e456332e-d319-45cc-b7b7-7af6bdadd549
	I0926 18:15:37.622506    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.622509    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.622511    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.622514    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.622518    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.622818    5496 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1223"},"items":[{"metadata":{"name":"etcd-multinode-108000","namespace":"kube-system","uid":"2a5e99f4-416d-4d75-acd2-33231f5f780d","resourceVersion":"1206","creationTimestamp":"2024-09-27T01:08:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"416a07783603924d51ef5ca1abc7c318","kubernetes.io/config.mirror":"416a07783603924d51ef5ca1abc7c318","kubernetes.io/config.seen":"2024-09-27T01:08:53.027445649Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 31223 chars]
	I0926 18:15:37.623551    5496 kubeadm.go:739] kubelet initialised
	I0926 18:15:37.623561    5496 kubeadm.go:740] duration metric: took 3.026723ms waiting for restarted kubelet to initialise ...
	I0926 18:15:37.623568    5496 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0926 18:15:37.623599    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0926 18:15:37.623604    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.623610    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.623614    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.625151    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.625158    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.625165    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.625168    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.625174    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.625179    5496 round_trippers.go:580]     Audit-Id: 3921b1e0-be8a-4be0-b8ec-28e8ca02b5d7
	I0926 18:15:37.625183    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.625187    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.625889    5496 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1223"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89950 chars]
	I0926 18:15:37.627833    5496 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace to be "Ready" ...
	I0926 18:15:37.627880    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:15:37.627885    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.627891    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.627896    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.629167    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.629174    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.629178    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.629182    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.629185    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.629188    5496 round_trippers.go:580]     Audit-Id: bf8c117e-157f-4745-a8ed-a8c3ab5e3832
	I0926 18:15:37.629190    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.629193    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.629485    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:15:37.629739    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:37.629746    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.629753    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.629758    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.631130    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.631136    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.631141    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.631143    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.631147    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.631149    5496 round_trippers.go:580]     Audit-Id: 5dfbf840-3705-44b6-b981-b1a6c84753e7
	I0926 18:15:37.631152    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.631155    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.631299    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1203","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5300 chars]
	I0926 18:15:37.631481    5496 pod_ready.go:98] node "multinode-108000" hosting pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:37.631491    5496 pod_ready.go:82] duration metric: took 3.649288ms for pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace to be "Ready" ...
	E0926 18:15:37.631497    5496 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-108000" hosting pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:37.631503    5496 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:15:37.631533    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-108000
	I0926 18:15:37.631537    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.631542    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.631550    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.633180    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.633186    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.633191    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.633195    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.633199    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.633201    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.633204    5496 round_trippers.go:580]     Audit-Id: 80d3d23d-8c8d-4d9e-81ab-6f6b586c7476
	I0926 18:15:37.633206    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.633493    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-108000","namespace":"kube-system","uid":"2a5e99f4-416d-4d75-acd2-33231f5f780d","resourceVersion":"1206","creationTimestamp":"2024-09-27T01:08:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"416a07783603924d51ef5ca1abc7c318","kubernetes.io/config.mirror":"416a07783603924d51ef5ca1abc7c318","kubernetes.io/config.seen":"2024-09-27T01:08:53.027445649Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6888 chars]
	I0926 18:15:37.633723    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:37.633730    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.633736    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.633739    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.634919    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.634926    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.634931    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.634935    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.634939    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.634943    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.634946    5496 round_trippers.go:580]     Audit-Id: ca68bf5a-bba4-476e-ad7f-28a326e90032
	I0926 18:15:37.634950    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.635121    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1203","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5300 chars]
	I0926 18:15:37.635314    5496 pod_ready.go:98] node "multinode-108000" hosting pod "etcd-multinode-108000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:37.635325    5496 pod_ready.go:82] duration metric: took 3.817202ms for pod "etcd-multinode-108000" in "kube-system" namespace to be "Ready" ...
	E0926 18:15:37.635331    5496 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-108000" hosting pod "etcd-multinode-108000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:37.635342    5496 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:15:37.635373    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-108000
	I0926 18:15:37.635378    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.635383    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.635388    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.636642    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.636649    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.636653    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.636657    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.636661    5496 round_trippers.go:580]     Audit-Id: d622a5b6-1484-4490-943a-0979fe9146ed
	I0926 18:15:37.636667    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.636669    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.636671    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.636814    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-108000","namespace":"kube-system","uid":"b8011715-128c-4dfc-94b7-cc9c04907c8a","resourceVersion":"1209","creationTimestamp":"2024-09-27T01:08:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.14:8443","kubernetes.io/config.hash":"3b2e0fdf454135a81bc6cacb88271d66","kubernetes.io/config.mirror":"3b2e0fdf454135a81bc6cacb88271d66","kubernetes.io/config.seen":"2024-09-27T01:08:53.027447712Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8136 chars]
	I0926 18:15:37.637064    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:37.637071    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.637077    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.637080    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.638194    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.638202    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.638208    5496 round_trippers.go:580]     Audit-Id: c1a3a6a4-2f27-46f2-9e8f-d52bd9ff3bb7
	I0926 18:15:37.638212    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.638216    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.638219    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.638223    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.638227    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.638336    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1203","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5300 chars]
	I0926 18:15:37.638508    5496 pod_ready.go:98] node "multinode-108000" hosting pod "kube-apiserver-multinode-108000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:37.638517    5496 pod_ready.go:82] duration metric: took 3.169848ms for pod "kube-apiserver-multinode-108000" in "kube-system" namespace to be "Ready" ...
	E0926 18:15:37.638522    5496 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-108000" hosting pod "kube-apiserver-multinode-108000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:37.638528    5496 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:15:37.638557    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-108000
	I0926 18:15:37.638562    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.638568    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.638570    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.639598    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.639605    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.639610    5496 round_trippers.go:580]     Audit-Id: 2a845ccb-913b-4e6b-97df-fc34682663e2
	I0926 18:15:37.639614    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.639617    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.639621    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.639625    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.639633    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.639799    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-108000","namespace":"kube-system","uid":"42fac17d-5eda-41e8-8747-902b605e747f","resourceVersion":"1210","creationTimestamp":"2024-09-27T01:08:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fec5fbfbd6a0fb8784a74d22da6a6ca2","kubernetes.io/config.mirror":"fec5fbfbd6a0fb8784a74d22da6a6ca2","kubernetes.io/config.seen":"2024-09-27T01:08:53.027448437Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7727 chars]
	I0926 18:15:37.752679    5496 request.go:632] Waited for 112.618774ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:37.752773    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:37.752782    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.752793    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.752800    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.755216    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:37.755228    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.755236    5496 round_trippers.go:580]     Audit-Id: 89f83d82-d21a-481c-8402-15e416c8d851
	I0926 18:15:37.755241    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.755245    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.755253    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.755262    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.755267    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.755466    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1203","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5300 chars]
	I0926 18:15:37.755727    5496 pod_ready.go:98] node "multinode-108000" hosting pod "kube-controller-manager-multinode-108000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:37.755741    5496 pod_ready.go:82] duration metric: took 117.206782ms for pod "kube-controller-manager-multinode-108000" in "kube-system" namespace to be "Ready" ...
	E0926 18:15:37.755750    5496 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-108000" hosting pod "kube-controller-manager-multinode-108000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:37.755758    5496 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9kjdl" in "kube-system" namespace to be "Ready" ...
	I0926 18:15:37.951740    5496 request.go:632] Waited for 195.930354ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9kjdl
	I0926 18:15:37.951848    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9kjdl
	I0926 18:15:37.951860    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.951871    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.951877    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.954934    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:15:37.954954    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.954962    5496 round_trippers.go:580]     Audit-Id: 49b3a949-470e-45ea-a4c2-b9d8c79e513c
	I0926 18:15:37.954967    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.954971    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.954974    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.954978    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.954981    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:38 GMT
	I0926 18:15:37.955148    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9kjdl","generateName":"kube-proxy-","namespace":"kube-system","uid":"979606a2-6bc4-46c0-8333-000bc25722f3","resourceVersion":"1221","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec618504-e724-4298-bf49-56d9dcae8b25","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec618504-e724-4298-bf49-56d9dcae8b25\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6395 chars]
	I0926 18:15:38.153228    5496 request.go:632] Waited for 197.663213ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:38.153375    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:38.153386    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:38.153397    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:38.153404    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:38.155572    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:38.155584    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:38.155590    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:38.155595    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:38.155598    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:38 GMT
	I0926 18:15:38.155603    5496 round_trippers.go:580]     Audit-Id: 6c454f18-073a-4419-a617-9b93353b93ec
	I0926 18:15:38.155606    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:38.155610    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:38.156089    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1203","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5300 chars]
	I0926 18:15:38.156356    5496 pod_ready.go:98] node "multinode-108000" hosting pod "kube-proxy-9kjdl" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:38.156369    5496 pod_ready.go:82] duration metric: took 400.60195ms for pod "kube-proxy-9kjdl" in "kube-system" namespace to be "Ready" ...
	E0926 18:15:38.156377    5496 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-108000" hosting pod "kube-proxy-9kjdl" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:38.156390    5496 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-ngs2x" in "kube-system" namespace to be "Ready" ...
	I0926 18:15:38.352472    5496 request.go:632] Waited for 195.963109ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ngs2x
	I0926 18:15:38.352547    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ngs2x
	I0926 18:15:38.352557    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:38.352568    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:38.352575    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:38.354731    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:38.354744    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:38.354757    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:38.354763    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:38.354769    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:38.354776    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:38.354780    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:38 GMT
	I0926 18:15:38.354783    5496 round_trippers.go:580]     Audit-Id: 13b915e8-c5a5-4ba8-a37e-887fcb24c5e8
	I0926 18:15:38.354964    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ngs2x","generateName":"kube-proxy-","namespace":"kube-system","uid":"f95c0316-b4a8-4f0c-a90b-a88af50fbc68","resourceVersion":"1040","creationTimestamp":"2024-09-27T01:09:40Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec618504-e724-4298-bf49-56d9dcae8b25","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:09:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec618504-e724-4298-bf49-56d9dcae8b25\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
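
The "Waited ... due to client-side throttling" lines above come from client-go's token-bucket rate limiter, which paces requests by the client's own QPS/Burst settings rather than server-side priority and fairness. A minimal sketch of the mechanism; the QPS and Burst values are illustrative assumptions, not minikube's configuration:

	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/client-go/util/flowcontrol"
	)

	func main() {
		// client-go wraps requests in a token-bucket limiter; once the burst
		// is spent, Wait blocks until a token refills at the QPS rate, and
		// the client logs how long it waited.
		limiter := flowcontrol.NewTokenBucketRateLimiter(5 /* QPS */, 10 /* Burst */)
		for i := 0; i < 15; i++ {
			start := time.Now()
			if err := limiter.Wait(context.Background()); err != nil {
				panic(err)
			}
			if d := time.Since(start); d > time.Millisecond {
				fmt.Printf("request %d waited %v due to client-side throttling\n", i, d)
			}
		}
	}
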
	I0926 18:15:38.551339    5496 request.go:632] Waited for 195.960776ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-108000-m02
	I0926 18:15:38.551415    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000-m02
	I0926 18:15:38.551421    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:38.551427    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:38.551432    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:38.553157    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:38.553168    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:38.553173    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:38.553183    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:38.553186    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:38 GMT
	I0926 18:15:38.553189    5496 round_trippers.go:580]     Audit-Id: 353635c7-2e16-43cf-b82a-1460b8b14ef7
	I0926 18:15:38.553192    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:38.553195    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:38.553266    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000-m02","uid":"653db940-78e0-431e-befd-25309d2a6cc8","resourceVersion":"1071","creationTimestamp":"2024-09-27T01:13:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_26T18_13_29_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:13:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3812 chars]
	I0926 18:15:38.553454    5496 pod_ready.go:93] pod "kube-proxy-ngs2x" in "kube-system" namespace has status "Ready":"True"
	I0926 18:15:38.553462    5496 pod_ready.go:82] duration metric: took 397.064629ms for pod "kube-proxy-ngs2x" in "kube-system" namespace to be "Ready" ...
	I0926 18:15:38.553469    5496 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-pwrqj" in "kube-system" namespace to be "Ready" ...
	I0926 18:15:38.751250    5496 request.go:632] Waited for 197.739854ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pwrqj
	I0926 18:15:38.751281    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pwrqj
	I0926 18:15:38.751286    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:38.751292    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:38.751296    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:38.752892    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:38.752901    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:38.752907    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:38.752922    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:38.752930    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:38.752932    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:38 GMT
	I0926 18:15:38.752935    5496 round_trippers.go:580]     Audit-Id: 5b3bd9ca-5e5e-4b9c-8224-a8d0e4244a1b
	I0926 18:15:38.752942    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:38.753015    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pwrqj","generateName":"kube-proxy-","namespace":"kube-system","uid":"dfc98f0e-705d-41fd-a871-9d4f8455b11d","resourceVersion":"1158","creationTimestamp":"2024-09-27T01:10:31Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec618504-e724-4298-bf49-56d9dcae8b25","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:10:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec618504-e724-4298-bf49-56d9dcae8b25\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0926 18:15:38.951829    5496 request.go:632] Waited for 198.542661ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-108000-m03
	I0926 18:15:38.951884    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000-m03
	I0926 18:15:38.951890    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:38.951896    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:38.951899    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:38.953525    5496 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0926 18:15:38.953533    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:38.953538    5496 round_trippers.go:580]     Content-Length: 210
	I0926 18:15:38.953541    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:39 GMT
	I0926 18:15:38.953543    5496 round_trippers.go:580]     Audit-Id: ea2855c8-1c82-4436-8d56-374bdd2e4173
	I0926 18:15:38.953545    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:38.953548    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:38.953550    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:38.953556    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:38.953569    5496 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-108000-m03\" not found","reason":"NotFound","details":{"name":"multinode-108000-m03","kind":"nodes"},"code":404}
	I0926 18:15:38.953689    5496 pod_ready.go:98] node "multinode-108000-m03" hosting pod "kube-proxy-pwrqj" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-108000-m03": nodes "multinode-108000-m03" not found
	I0926 18:15:38.953698    5496 pod_ready.go:82] duration metric: took 400.222259ms for pod "kube-proxy-pwrqj" in "kube-system" namespace to be "Ready" ...
	E0926 18:15:38.953704    5496 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-108000-m03" hosting pod "kube-proxy-pwrqj" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-108000-m03": nodes "multinode-108000-m03" not found
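
Here the waiter gives up on kube-proxy-pwrqj because its hosting node is gone: the GET for multinode-108000-m03 returned 404, and retrying until the 4m0s timeout would be pointless. A minimal sketch of the shape of that check (my assumption, not minikube's exact code), using apimachinery's IsNotFound helper:

	package waiters

	import (
		"context"
		"fmt"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// hostNodeExists reports whether a pod's hosting node is still present,
	// treating a 404 as a terminal "skip this pod" signal rather than an
	// error worth retrying.
	func hostNodeExists(ctx context.Context, cs kubernetes.Interface, nodeName string) (bool, error) {
		_, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, fmt.Errorf("node %q not found, skipping pod wait", nodeName)
		}
		return err == nil, err
	}
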
	I0926 18:15:38.953712    5496 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:15:39.151391    5496 request.go:632] Waited for 197.640189ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-108000
	I0926 18:15:39.151456    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-108000
	I0926 18:15:39.151463    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:39.151472    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:39.151478    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:39.153561    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:39.153573    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:39.153579    5496 round_trippers.go:580]     Audit-Id: 77b55748-0b02-493c-be9d-e0d00bcb9c4a
	I0926 18:15:39.153583    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:39.153586    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:39.153588    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:39.153592    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:39.153595    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:39 GMT
	I0926 18:15:39.153764    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-108000","namespace":"kube-system","uid":"e5b482e0-154d-4620-8f24-1ebf181b9c1b","resourceVersion":"1207","creationTimestamp":"2024-09-27T01:08:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"40cf241c42ae492b94bc92cec52f27f4","kubernetes.io/config.mirror":"40cf241c42ae492b94bc92cec52f27f4","kubernetes.io/config.seen":"2024-09-27T01:08:53.027449029Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5439 chars]
	I0926 18:15:39.351587    5496 request.go:632] Waited for 197.563253ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:39.351681    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:39.351701    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:39.351714    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:39.351722    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:39.356273    5496 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 18:15:39.356285    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:39.356290    5496 round_trippers.go:580]     Audit-Id: cee47110-3992-4d7a-a6ce-e4b45276cf1d
	I0926 18:15:39.356294    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:39.356297    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:39.356299    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:39.356301    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:39.356304    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:39 GMT
	I0926 18:15:39.356418    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:39.356629    5496 pod_ready.go:98] node "multinode-108000" hosting pod "kube-scheduler-multinode-108000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:39.356639    5496 pod_ready.go:82] duration metric: took 402.920719ms for pod "kube-scheduler-multinode-108000" in "kube-system" namespace to be "Ready" ...
	E0926 18:15:39.356646    5496 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-108000" hosting pod "kube-scheduler-multinode-108000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:39.356652    5496 pod_ready.go:39] duration metric: took 1.733070528s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
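
The whole "waiting up to 4m0s ... to be Ready" phase above is a poll-until-timeout loop over each pod's Ready condition. A minimal sketch of that pattern with apimachinery's polling helper; the 500ms interval and the decision to swallow transient GET errors are assumptions:

	package waiters

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls a pod until its Ready condition is True or the
	// 4m0s budget (matching the log above) runs out.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
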
	I0926 18:15:39.356664    5496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 18:15:39.367880    5496 command_runner.go:130] > -16
	I0926 18:15:39.367906    5496 ops.go:34] apiserver oom_adj: -16
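
The oom_adj probe above confirms the apiserver process is protected from the OOM killer (-16). A local sketch of the same shell pipeline, run with os/exec instead of minikube's SSH runner; it assumes a host where kube-apiserver and pgrep are available:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same probe the test runs over SSH: resolve the apiserver PID with
		// pgrep and read its oom_adj score from /proc.
		out, err := exec.Command("/bin/bash", "-c",
			`cat /proc/$(pgrep kube-apiserver)/oom_adj`).Output()
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out))) // e.g. -16
	}
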
	I0926 18:15:39.367911    5496 kubeadm.go:597] duration metric: took 10.178588037s to restartPrimaryControlPlane
	I0926 18:15:39.367916    5496 kubeadm.go:394] duration metric: took 10.201506133s to StartCluster
	I0926 18:15:39.367931    5496 settings.go:142] acquiring lock: {Name:mka8948d0f70add5c5f20f2eca7124a97a496c1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:15:39.368021    5496 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 18:15:39.368410    5496 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/kubeconfig: {Name:mkb9c069c578c490d8cb734b05a21821e9215482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:15:39.368773    5496 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:15:39.368842    5496 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 18:15:39.368928    5496 config.go:182] Loaded profile config "multinode-108000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:15:39.411973    5496 out.go:177] * Verifying Kubernetes components...
	I0926 18:15:39.469853    5496 out.go:177] * Enabled addons: 
	I0926 18:15:39.490939    5496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:15:39.511963    5496 addons.go:510] duration metric: took 143.130056ms for enable addons: enabled=[]
	I0926 18:15:39.632619    5496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 18:15:39.645920    5496 node_ready.go:35] waiting up to 6m0s for node "multinode-108000" to be "Ready" ...
	I0926 18:15:39.645984    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:39.645990    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:39.645996    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:39.645999    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:39.647771    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:39.647779    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:39.647784    5496 round_trippers.go:580]     Audit-Id: 2af6b540-0cb6-4cb7-9d23-5549f361b2a8
	I0926 18:15:39.647787    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:39.647790    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:39.647792    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:39.647794    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:39.647796    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:39 GMT
	I0926 18:15:39.647973    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:40.147221    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:40.147241    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:40.147250    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:40.147257    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:40.149046    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:40.149059    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:40.149068    5496 round_trippers.go:580]     Audit-Id: 6eef010a-be8d-4142-b1ab-c4e8fb8b8a6d
	I0926 18:15:40.149074    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:40.149082    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:40.149090    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:40.149095    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:40.149100    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:40 GMT
	I0926 18:15:40.149314    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:40.647082    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:40.647109    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:40.647121    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:40.647125    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:40.649949    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:40.649963    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:40.649970    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:40.649975    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:40.649979    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:40.649983    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:40 GMT
	I0926 18:15:40.649987    5496 round_trippers.go:580]     Audit-Id: 8142bc98-9616-4eed-8557-f89eb4761b93
	I0926 18:15:40.649990    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:40.650155    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:41.147405    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:41.147428    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:41.147440    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:41.147447    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:41.150144    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:41.150156    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:41.150163    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:41.150167    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:41 GMT
	I0926 18:15:41.150172    5496 round_trippers.go:580]     Audit-Id: f4bc574e-0a0f-4946-ba1b-cec1a8d52514
	I0926 18:15:41.150176    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:41.150180    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:41.150185    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:41.150624    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:41.648210    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:41.648234    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:41.648246    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:41.648252    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:41.651452    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:15:41.651468    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:41.651475    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:41 GMT
	I0926 18:15:41.651478    5496 round_trippers.go:580]     Audit-Id: 828de500-a31f-4597-abde-16b7a5328d86
	I0926 18:15:41.651482    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:41.651486    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:41.651514    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:41.651523    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:41.651615    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:41.651871    5496 node_ready.go:53] node "multinode-108000" has status "Ready":"False"
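
From here the log settles into a roughly 500ms polling loop on GET /api/v1/nodes/multinode-108000, repeating until the node's Ready condition flips to True. A minimal sketch of the condition check itself, assuming it mirrors what node_ready.go evaluates:

	package waiters

	import corev1 "k8s.io/api/core/v1"

	// isNodeReady mirrors the check behind the node_ready lines: a node
	// counts as Ready only when its NodeReady condition reports True.
	func isNodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
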
	I0926 18:15:42.147225    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:42.147245    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:42.147256    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:42.147262    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:42.149721    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:42.149735    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:42.149742    5496 round_trippers.go:580]     Audit-Id: 31bfc2e8-f07f-46ca-9047-a6aa9755b1d7
	I0926 18:15:42.149747    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:42.149750    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:42.149754    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:42.149757    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:42.149760    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:42 GMT
	I0926 18:15:42.149931    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:42.648266    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:42.648290    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:42.648302    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:42.648310    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:42.651671    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:15:42.651688    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:42.651695    5496 round_trippers.go:580]     Audit-Id: 9aac7977-8719-4334-b9ac-7cc704bfbe28
	I0926 18:15:42.651699    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:42.651710    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:42.651717    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:42.651720    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:42.651723    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:42 GMT
	I0926 18:15:42.652089    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:43.146122    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:43.146151    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:43.146208    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:43.146215    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:43.148579    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:43.148595    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:43.148601    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:43.148606    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:43 GMT
	I0926 18:15:43.148609    5496 round_trippers.go:580]     Audit-Id: ab475b3c-fd55-4a73-8937-b0e4cf8651e7
	I0926 18:15:43.148613    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:43.148616    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:43.148620    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:43.148731    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:43.646565    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:43.646593    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:43.646606    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:43.646628    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:43.649347    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:43.649371    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:43.649382    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:43.649390    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:43.649397    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:43.649404    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:43 GMT
	I0926 18:15:43.649412    5496 round_trippers.go:580]     Audit-Id: a3242efd-e344-4614-ac4e-da66db94e4ac
	I0926 18:15:43.649418    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:43.649563    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:44.146285    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:44.146309    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:44.146326    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:44.146333    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:44.149072    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:44.149087    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:44.149094    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:44.149098    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:44.149101    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:44.149104    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:44 GMT
	I0926 18:15:44.149108    5496 round_trippers.go:580]     Audit-Id: c78d9ad7-b266-4e97-bc18-d776ed1ec708
	I0926 18:15:44.149110    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:44.149347    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:44.149615    5496 node_ready.go:53] node "multinode-108000" has status "Ready":"False"
	I0926 18:15:44.646839    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:44.646914    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:44.646944    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:44.646951    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:44.649331    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:44.649343    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:44.649349    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:44.649380    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:44.649384    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:44.649387    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:44.649390    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:44 GMT
	I0926 18:15:44.649393    5496 round_trippers.go:580]     Audit-Id: f16df67f-5c78-4531-95cc-bddd6e410c30
	I0926 18:15:44.649499    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:45.146768    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:45.146791    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:45.146803    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:45.146811    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:45.149481    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:45.149497    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:45.149504    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:45.149509    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:45.149516    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:45.149520    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:45 GMT
	I0926 18:15:45.149524    5496 round_trippers.go:580]     Audit-Id: 294c5f7c-95aa-44c0-b8ab-ffb11bd65994
	I0926 18:15:45.149528    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:45.149826    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:45.647608    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:45.647635    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:45.647647    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:45.647653    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:45.650647    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:45.650662    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:45.650669    5496 round_trippers.go:580]     Audit-Id: e1c797e2-a13a-48fb-a9e7-6b8c5114275e
	I0926 18:15:45.650673    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:45.650676    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:45.650680    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:45.650683    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:45.650686    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:45 GMT
	I0926 18:15:45.650750    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:46.147387    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:46.147416    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:46.147428    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:46.147434    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:46.150179    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:46.150194    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:46.150200    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:46.150205    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:46 GMT
	I0926 18:15:46.150208    5496 round_trippers.go:580]     Audit-Id: 74a0db55-27bf-4445-9980-c9e959b41522
	I0926 18:15:46.150212    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:46.150215    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:46.150219    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:46.150456    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:46.150715    5496 node_ready.go:53] node "multinode-108000" has status "Ready":"False"
	I0926 18:15:46.648183    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:46.648246    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:46.648259    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:46.648266    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:46.650780    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:46.650795    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:46.650801    5496 round_trippers.go:580]     Audit-Id: 3d3553d4-ea47-4dbd-9bc7-9ca44dfc10bb
	I0926 18:15:46.650804    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:46.650807    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:46.650811    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:46.650813    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:46.650817    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:46 GMT
	I0926 18:15:46.650911    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:47.147243    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:47.147269    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:47.147281    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:47.147286    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:47.150053    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:47.150065    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:47.150107    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:47.150122    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:47.150126    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:47 GMT
	I0926 18:15:47.150133    5496 round_trippers.go:580]     Audit-Id: a51c9902-f023-498d-8e55-8a8baf3a507e
	I0926 18:15:47.150137    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:47.150142    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:47.150338    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:47.647019    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:47.647041    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:47.647053    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:47.647060    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:47.649853    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:47.649870    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:47.649881    5496 round_trippers.go:580]     Audit-Id: d61c241f-93a8-48df-ae52-24d212411a49
	I0926 18:15:47.649888    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:47.649893    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:47.649897    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:47.649901    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:47.649905    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:47 GMT
	I0926 18:15:47.649974    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:48.146586    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:48.146600    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:48.146607    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:48.146609    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:48.148507    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:48.148519    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:48.148524    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:48.148527    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:48.148529    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:48.148532    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:48 GMT
	I0926 18:15:48.148535    5496 round_trippers.go:580]     Audit-Id: 15e3c12e-ba63-4c5e-9b0a-d314bdefe032
	I0926 18:15:48.148537    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:48.149117    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:48.646462    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:48.646501    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:48.646509    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:48.646514    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:48.648875    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:48.648887    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:48.648892    5496 round_trippers.go:580]     Audit-Id: 2befc4ab-814c-4256-8da7-1050a0cca48d
	I0926 18:15:48.648895    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:48.648898    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:48.648900    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:48.648903    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:48.648905    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:48 GMT
	I0926 18:15:48.649208    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:48.649414    5496 node_ready.go:53] node "multinode-108000" has status "Ready":"False"
	I0926 18:15:49.147028    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:49.147047    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:49.147058    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:49.147064    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:49.149308    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:49.149320    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:49.149327    5496 round_trippers.go:580]     Audit-Id: 15cc7eff-6de1-4a45-8be6-a4f58b18f504
	I0926 18:15:49.149334    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:49.149341    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:49.149348    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:49.149355    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:49.149360    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:49 GMT
	I0926 18:15:49.149557    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:49.647633    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:49.647654    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:49.647666    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:49.647670    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:49.650288    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:49.650304    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:49.650312    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:49.650324    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:49.650330    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:49.650333    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:49.650338    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:49 GMT
	I0926 18:15:49.650342    5496 round_trippers.go:580]     Audit-Id: 35bb502e-67f0-4d85-9636-31fc930b739e
	I0926 18:15:49.650451    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:50.147292    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:50.147329    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:50.147337    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:50.147342    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:50.149517    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:50.149531    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:50.149539    5496 round_trippers.go:580]     Audit-Id: 835635e1-2c69-4eb5-b337-1d8a755e0397
	I0926 18:15:50.149544    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:50.149548    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:50.149552    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:50.149557    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:50.149561    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:50 GMT
	I0926 18:15:50.149647    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:50.646117    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:50.646172    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:50.646188    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:50.646197    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:50.648096    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:50.648111    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:50.648117    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:50.648122    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:50.648125    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:50.648127    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:50.648131    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:50 GMT
	I0926 18:15:50.648134    5496 round_trippers.go:580]     Audit-Id: 52a44732-48d9-42b4-ad51-ae93b9e48478
	I0926 18:15:50.648295    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:51.147605    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:51.147631    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:51.147643    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:51.147652    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:51.150683    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:15:51.150702    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:51.150713    5496 round_trippers.go:580]     Audit-Id: e1e22eb3-0afb-4cc7-9274-59cdda58db39
	I0926 18:15:51.150719    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:51.150726    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:51.150731    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:51.150736    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:51.150741    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:51 GMT
	I0926 18:15:51.150914    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:51.151178    5496 node_ready.go:53] node "multinode-108000" has status "Ready":"False"
	I0926 18:15:51.646326    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:51.646431    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:51.646447    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:51.646453    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:51.649131    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:51.649146    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:51.649153    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:51 GMT
	I0926 18:15:51.649158    5496 round_trippers.go:580]     Audit-Id: 9cb91ab4-2d11-4532-a40b-cc1b5ee81802
	I0926 18:15:51.649161    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:51.649164    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:51.649168    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:51.649174    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:51.649593    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:52.148217    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:52.148251    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:52.148262    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:52.148268    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:52.150889    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:52.150904    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:52.150914    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:52.150919    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:52.150923    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:52.150929    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:52 GMT
	I0926 18:15:52.150933    5496 round_trippers.go:580]     Audit-Id: 6406a369-6a5f-4b79-b384-f8195f179467
	I0926 18:15:52.150939    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:52.151107    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:52.646242    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:52.646270    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:52.646282    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:52.646298    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:52.648904    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:52.648917    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:52.648924    5496 round_trippers.go:580]     Audit-Id: 8d595130-e0e8-4a9f-8325-ecd9c038b2ec
	I0926 18:15:52.648928    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:52.648933    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:52.648937    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:52.648941    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:52.648945    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:52 GMT
	I0926 18:15:52.649050    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:53.202365    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:53.202394    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:53.202406    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:53.202413    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:53.205265    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:53.205284    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:53.205291    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:53.205296    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:53.205299    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:53.205302    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:53.205305    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:53 GMT
	I0926 18:15:53.205309    5496 round_trippers.go:580]     Audit-Id: 3c1e570b-fb26-49b4-acd4-de9d6accf47d
	I0926 18:15:53.205527    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:53.700967    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:53.701022    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:53.701037    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:53.701043    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:53.703812    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:53.703827    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:53.703834    5496 round_trippers.go:580]     Audit-Id: da999e41-17c5-45fe-826f-98d56efdbc9d
	I0926 18:15:53.703839    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:53.703842    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:53.703845    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:53.703850    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:53.703854    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:53 GMT
	I0926 18:15:53.703954    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:53.704211    5496 node_ready.go:53] node "multinode-108000" has status "Ready":"False"
	I0926 18:15:54.202328    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:54.202355    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:54.202367    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:54.202375    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:54.205451    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:15:54.205480    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:54.205506    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:54 GMT
	I0926 18:15:54.205512    5496 round_trippers.go:580]     Audit-Id: 25373c7c-92f8-4dda-b00b-5c0e1b198873
	I0926 18:15:54.205516    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:54.205520    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:54.205525    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:54.205530    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:54.205621    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:54.701377    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:54.701401    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:54.701410    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:54.701416    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:54.703608    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:54.703620    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:54.703624    5496 round_trippers.go:580]     Audit-Id: e7e9133c-3f51-4fbf-823b-1bfe8e1cee58
	I0926 18:15:54.703628    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:54.703630    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:54.703632    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:54.703636    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:54.703638    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:54 GMT
	I0926 18:15:54.703967    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:55.201675    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:55.201703    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:55.201743    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:55.201753    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:55.204412    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:55.204427    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:55.204434    5496 round_trippers.go:580]     Audit-Id: 5c53dcba-4548-48bd-acb1-dfed0a263959
	I0926 18:15:55.204440    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:55.204446    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:55.204449    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:55.204453    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:55.204457    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:55 GMT
	I0926 18:15:55.204593    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:55.701211    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:55.701249    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:55.701271    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:55.701294    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:55.703603    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:55.703611    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:55.703616    5496 round_trippers.go:580]     Audit-Id: d5c9b478-c027-41c6-ad2a-3c9bcc0d6c32
	I0926 18:15:55.703620    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:55.703624    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:55.703627    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:55.703632    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:55.703637    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:55 GMT
	I0926 18:15:55.703858    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:56.203084    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:56.203106    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:56.203118    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:56.203124    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:56.205928    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:56.205942    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:56.205949    5496 round_trippers.go:580]     Audit-Id: 122d71e6-ac12-4098-9ded-b9c3a04efc33
	I0926 18:15:56.205954    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:56.205980    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:56.205988    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:56.205992    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:56.205997    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:56 GMT
	I0926 18:15:56.206116    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1348","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0926 18:15:56.206370    5496 node_ready.go:49] node "multinode-108000" has status "Ready":"True"
	I0926 18:15:56.206386    5496 node_ready.go:38] duration metric: took 16.505593065s for node "multinode-108000" to be "Ready" ...
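(The 16.5s run of GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000 requests above is the node-readiness poll: the Node object is re-fetched roughly every 500ms until its Ready condition flips from "False" to "True". Below is a minimal client-go sketch of that polling pattern; it is an illustrative approximation of what node_ready.go does, not minikube's actual source, and the kubeconfig path is a placeholder.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForNodeReady re-fetches the Node about every 500ms until its
	// Ready condition reports "True" or the timeout expires, mirroring
	// the GET /api/v1/nodes/<name> loop visible in the log above.
	func waitForNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		// "/path/to/kubeconfig" is a placeholder for the cluster's kubeconfig.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitForNodeReady(cs, "multinode-108000", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}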
	I0926 18:15:56.206394    5496 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0926 18:15:56.206440    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0926 18:15:56.206448    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:56.206456    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:56.206461    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:56.208603    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:56.208616    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:56.208624    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:56.208630    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:56.208633    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:56 GMT
	I0926 18:15:56.208635    5496 round_trippers.go:580]     Audit-Id: 61da8b37-0a42-4435-b609-7377a91b7d3e
	I0926 18:15:56.208639    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:56.208642    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:56.209824    5496 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1349"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 88976 chars]
	I0926 18:15:56.211733    5496 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace to be "Ready" ...
	I0926 18:15:56.211771    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:15:56.211777    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:56.211783    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:56.211787    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:56.212766    5496 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0926 18:15:56.212775    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:56.212781    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:56.212785    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:56.212791    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:56.212795    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:56.212797    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:56 GMT
	I0926 18:15:56.212800    5496 round_trippers.go:580]     Audit-Id: 8e5a3db4-8372-4799-9a3d-207543401e6a
	I0926 18:15:56.212967    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:15:56.213209    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:56.213217    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:56.213223    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:56.213227    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:56.214083    5496 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0926 18:15:56.214092    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:56.214099    5496 round_trippers.go:580]     Audit-Id: 2d49d4ec-0c96-4aef-b854-171e10728aab
	I0926 18:15:56.214104    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:56.214108    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:56.214113    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:56.214117    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:56.214120    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:56 GMT
	I0926 18:15:56.214282    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1348","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
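(The requests above are one iteration of the pod-readiness poll that began at pod_ready.go:79: for each system-critical pod, starting here with coredns-7c65d6cfc9-hxdhm, a GET on the Pod is followed by a GET on its Node, roughly twice per second, until the pod reports Ready. Outside the test harness the same wait can be approximated by hand with kubectl, e.g. kubectl -n kube-system wait --for=condition=Ready pod/coredns-7c65d6cfc9-hxdhm --timeout=6m.)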
	I0926 18:15:56.714012    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:15:56.714038    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:56.714050    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:56.714057    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:56.716847    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:56.716860    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:56.716867    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:56 GMT
	I0926 18:15:56.716871    5496 round_trippers.go:580]     Audit-Id: 672c41eb-7827-4345-ab5f-c4034692866c
	I0926 18:15:56.716898    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:56.716904    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:56.716911    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:56.716917    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:56.717040    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:15:56.717409    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:56.717419    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:56.717427    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:56.717432    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:56.718807    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:56.718815    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:56.718822    5496 round_trippers.go:580]     Audit-Id: 1eba1797-7604-48e5-b4ce-809a8efa23bf
	I0926 18:15:56.718826    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:56.718831    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:56.718835    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:56.718838    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:56.718841    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:56 GMT
	I0926 18:15:56.719060    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1348","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0926 18:15:57.212304    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:15:57.212327    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:57.212339    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:57.212346    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:57.215305    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:57.215319    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:57.215326    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:57 GMT
	I0926 18:15:57.215331    5496 round_trippers.go:580]     Audit-Id: d2a664ee-7641-4337-9bf7-0aeb4ca9f53a
	I0926 18:15:57.215335    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:57.215339    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:57.215342    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:57.215346    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:57.215467    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:15:57.215841    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:57.215850    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:57.215858    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:57.215863    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:57.217440    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:57.217449    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:57.217454    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:57.217457    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:57 GMT
	I0926 18:15:57.217461    5496 round_trippers.go:580]     Audit-Id: ea932f85-7c75-4b35-b73b-465e2c509b9d
	I0926 18:15:57.217463    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:57.217468    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:57.217472    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:57.217802    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1348","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0926 18:15:57.712215    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:15:57.712237    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:57.712249    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:57.712253    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:57.714882    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:57.714896    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:57.714904    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:57 GMT
	I0926 18:15:57.714908    5496 round_trippers.go:580]     Audit-Id: 16a30886-7e5b-42b3-a174-2be2e25ac06c
	I0926 18:15:57.714912    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:57.714915    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:57.714964    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:57.714972    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:57.715080    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:15:57.715455    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:57.715464    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:57.715472    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:57.715477    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:57.716752    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:57.716760    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:57.716764    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:57.716768    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:57.716770    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:57.716774    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:57 GMT
	I0926 18:15:57.716776    5496 round_trippers.go:580]     Audit-Id: 7d16a94f-4d2f-410f-9f26-8680795887e3
	I0926 18:15:57.716781    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:57.717030    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1348","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0926 18:15:58.212412    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:15:58.212433    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:58.212444    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:58.212450    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:58.214715    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:58.214728    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:58.214735    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:58.214738    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:58.214742    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:58.214746    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:58.214750    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:58 GMT
	I0926 18:15:58.214753    5496 round_trippers.go:580]     Audit-Id: 0f2b31c3-e408-46e2-9415-a4ff46f2dfac
	I0926 18:15:58.214827    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:15:58.215220    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:58.215229    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:58.215237    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:58.215241    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:58.216413    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:58.216421    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:58.216426    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:58.216429    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:58 GMT
	I0926 18:15:58.216432    5496 round_trippers.go:580]     Audit-Id: b4d539a1-dacc-4795-97bf-47cfcba4ec3c
	I0926 18:15:58.216435    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:58.216438    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:58.216440    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:58.216514    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1348","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0926 18:15:58.216686    5496 pod_ready.go:103] pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace has status "Ready":"False"
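
The status line above is emitted by minikube's pod_ready wait loop: roughly every 500 ms (the polls land at ~.2 and ~.7 of each second) it re-fetches the coredns pod and its node, and it keeps polling until the pod's Ready condition turns True. Below is a minimal client-go sketch of that pattern; the helper name waitPodReady and the fixed 500 ms interval are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the pod reports Ready or ctx is cancelled.
// Each iteration corresponds to one "GET .../pods/<name>" record in the log.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextCancel(ctx, 500*time.Millisecond, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err // a production loop might tolerate transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					// The log's `has status "Ready":"False"` reflects this condition.
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // Ready condition not reported yet; keep polling
		})
}

The companion GET of /api/v1/nodes/multinode-108000 on each iteration shows the loop also consulting the node object while it waits; the sketch omits that step for brevity.
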
	I0926 18:15:58.714077    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:15:58.714098    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:58.714110    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:58.714116    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:58.717289    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:15:58.717309    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:58.717317    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:58.717322    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:58.717327    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:58 GMT
	I0926 18:15:58.717332    5496 round_trippers.go:580]     Audit-Id: 33e548d3-71b4-41b2-a1d5-adb470881de3
	I0926 18:15:58.717336    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:58.717350    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:58.717486    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:15:58.717877    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:58.717887    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:58.717895    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:58.717899    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:58.719406    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:58.719417    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:58.719424    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:58.719430    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:58.719436    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:58.719441    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:58 GMT
	I0926 18:15:58.719448    5496 round_trippers.go:580]     Audit-Id: df472da9-1c0c-4b68-a7c0-acf36a5fa7a6
	I0926 18:15:58.719452    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:58.719566    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1348","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0926 18:15:59.213590    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:15:59.213613    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:59.213624    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:59.213630    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:59.215847    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:59.215863    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:59.215874    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:59 GMT
	I0926 18:15:59.215881    5496 round_trippers.go:580]     Audit-Id: 6cb8f815-7424-49a4-b9f7-da498411df5e
	I0926 18:15:59.215887    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:59.215894    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:59.215898    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:59.215902    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:59.216172    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:15:59.216558    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:59.216567    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:59.216575    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:59.216578    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:59.217991    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:59.217999    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:59.218003    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:59 GMT
	I0926 18:15:59.218005    5496 round_trippers.go:580]     Audit-Id: 7bb7792c-1170-4d30-a62c-11150cd8ad70
	I0926 18:15:59.218007    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:59.218009    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:59.218012    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:59.218016    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:59.218152    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:15:59.712302    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:15:59.712318    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:59.712326    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:59.712330    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:59.714530    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:59.714543    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:59.714549    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:59.714552    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:59.714555    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:59.714558    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:59.714560    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:59 GMT
	I0926 18:15:59.714564    5496 round_trippers.go:580]     Audit-Id: c8d73b00-d622-4d32-9315-9399cbe25354
	I0926 18:15:59.714651    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:15:59.714945    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:59.714952    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:59.714958    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:59.714961    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:59.716160    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:59.716171    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:59.716178    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:59.716185    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:59.716189    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:59 GMT
	I0926 18:15:59.716201    5496 round_trippers.go:580]     Audit-Id: d62e282e-5117-45a9-a633-026e25ffb4f7
	I0926 18:15:59.716206    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:59.716209    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:59.716316    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:00.212726    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:00.212747    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:00.212759    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:00.212766    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:00.215319    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:00.215332    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:00.215339    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:00 GMT
	I0926 18:16:00.215342    5496 round_trippers.go:580]     Audit-Id: 45bd5cc8-73e5-46c5-89ec-b0de7dc656dc
	I0926 18:16:00.215347    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:00.215352    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:00.215357    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:00.215361    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:00.215759    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:00.216046    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:00.216053    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:00.216059    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:00.216063    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:00.217222    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:00.217233    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:00.217240    5496 round_trippers.go:580]     Audit-Id: 545490b6-b247-40aa-8eb9-a3adb2b53791
	I0926 18:16:00.217246    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:00.217253    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:00.217256    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:00.217261    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:00.217264    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:00 GMT
	I0926 18:16:00.217457    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:00.217623    5496 pod_ready.go:103] pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace has status "Ready":"False"
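
The GET / Request Headers / Response Status / Response Headers records that dominate this log come from client-go's debugging round tripper, which client-go wires in when klog verbosity is raised; request.go then logs response bodies in truncated form, hence the "[truncated N chars]" markers. A sketch of enabling the same output in an ordinary client-go program follows, under the assumption that headers and truncated bodies appear around verbosity 8 (the exact level-to-detail mapping is client-go's, not guaranteed here).

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	// Register klog's flags (including -v) on the default flag set.
	klog.InitFlags(nil)
	// Assumption: v=8 is roughly the level at which client-go logs request
	// headers, response headers, and truncated response bodies, as seen above.
	_ = flag.Set("v", "8")
	flag.Parse()
	// ... construct a rest.Config / kubernetes.Clientset as usual; subsequent
	// API calls then emit round_trippers/request debug records like this log.
}
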
	I0926 18:16:00.712324    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:00.712344    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:00.712355    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:00.712362    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:00.714967    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:00.714979    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:00.714986    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:00.714990    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:00.714993    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:00.715014    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:00 GMT
	I0926 18:16:00.715021    5496 round_trippers.go:580]     Audit-Id: ceaa57e5-97c0-4be5-9107-34df439ba6b9
	I0926 18:16:00.715025    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:00.715270    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:00.715655    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:00.715664    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:00.715672    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:00.715681    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:00.716935    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:00.716943    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:00.716948    5496 round_trippers.go:580]     Audit-Id: 9c219c44-9cd2-4dcc-a080-9754ce4c68c0
	I0926 18:16:00.716953    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:00.716957    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:00.716961    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:00.716966    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:00.716970    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:00 GMT
	I0926 18:16:00.717123    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:01.213999    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:01.214022    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:01.214032    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:01.214038    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:01.216784    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:01.216798    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:01.216805    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:01.216810    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:01.216814    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:01 GMT
	I0926 18:16:01.216818    5496 round_trippers.go:580]     Audit-Id: 1c32c3ec-34d9-4329-a95c-7a623e33a5e3
	I0926 18:16:01.216821    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:01.216825    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:01.216938    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:01.217324    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:01.217334    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:01.217342    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:01.217349    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:01.218688    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:01.218693    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:01.218698    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:01.218701    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:01 GMT
	I0926 18:16:01.218703    5496 round_trippers.go:580]     Audit-Id: 2b8d27b0-53cb-483a-b5e8-2427a38a3ea6
	I0926 18:16:01.218706    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:01.218713    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:01.218716    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:01.218891    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:01.712366    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:01.712386    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:01.712399    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:01.712407    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:01.715539    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:16:01.715557    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:01.715567    5496 round_trippers.go:580]     Audit-Id: 4fcca3fb-432d-4dd9-bc80-86573dcfd1e2
	I0926 18:16:01.715573    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:01.715579    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:01.715599    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:01.715611    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:01.715620    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:01 GMT
	I0926 18:16:01.715820    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:01.716111    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:01.716117    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:01.716123    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:01.716126    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:01.717445    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:01.717453    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:01.717460    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:01.717463    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:01.717466    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:01.717469    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:01.717472    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:01 GMT
	I0926 18:16:01.717475    5496 round_trippers.go:580]     Audit-Id: 640fe6c8-1072-4c29-b5d8-0e4fe03e8745
	I0926 18:16:01.717532    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:02.212614    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:02.212644    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:02.212651    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:02.212655    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:02.214330    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:02.214343    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:02.214350    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:02.214359    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:02 GMT
	I0926 18:16:02.214363    5496 round_trippers.go:580]     Audit-Id: 70ba3b1f-8ca0-4854-96ff-a08f2bc197be
	I0926 18:16:02.214366    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:02.214368    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:02.214370    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:02.214423    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:02.214716    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:02.214723    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:02.214729    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:02.214733    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:02.216107    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:02.216116    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:02.216121    5496 round_trippers.go:580]     Audit-Id: a6b51239-30e1-4d31-b12c-c65183b73325
	I0926 18:16:02.216124    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:02.216128    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:02.216131    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:02.216134    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:02.216136    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:02 GMT
	I0926 18:16:02.216379    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:02.712239    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:02.712267    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:02.712279    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:02.712286    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:02.714720    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:02.714734    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:02.714742    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:02.714746    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:02.714750    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:02.714753    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:02 GMT
	I0926 18:16:02.714756    5496 round_trippers.go:580]     Audit-Id: 9596591f-7f0b-4129-9236-d5093a1455af
	I0926 18:16:02.714760    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:02.715017    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:02.715391    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:02.715407    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:02.715415    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:02.715419    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:02.717053    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:02.717060    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:02.717066    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:02.717069    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:02.717088    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:02 GMT
	I0926 18:16:02.717091    5496 round_trippers.go:580]     Audit-Id: ba065de0-a695-46d7-a843-1f2af8257246
	I0926 18:16:02.717094    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:02.717097    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:02.717301    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:02.717470    5496 pod_ready.go:103] pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace has status "Ready":"False"
	I0926 18:16:03.212525    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:03.212541    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:03.212550    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:03.212555    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:03.214720    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:03.214732    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:03.214738    5496 round_trippers.go:580]     Audit-Id: a79b4b2f-5f5c-493b-93c9-ec1ff1cdb6d6
	I0926 18:16:03.214741    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:03.214758    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:03.214760    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:03.214764    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:03.214766    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:03 GMT
	I0926 18:16:03.214817    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:03.215162    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:03.215169    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:03.215174    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:03.215178    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:03.216455    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:03.216464    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:03.216469    5496 round_trippers.go:580]     Audit-Id: 1c3690bb-9a1a-4c5d-b47d-8a23141028a8
	I0926 18:16:03.216490    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:03.216494    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:03.216496    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:03.216499    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:03.216501    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:03 GMT
	I0926 18:16:03.216563    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:03.712497    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:03.712520    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:03.712548    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:03.712561    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:03.714758    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:03.714768    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:03.714773    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:03.714776    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:03.714778    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:03.714781    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:03.714784    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:03 GMT
	I0926 18:16:03.714786    5496 round_trippers.go:580]     Audit-Id: b20377b1-152f-4b0a-97fa-33cb3f196e68
	I0926 18:16:03.714846    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:03.715145    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:03.715152    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:03.715157    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:03.715160    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:03.716631    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:03.716641    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:03.716647    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:03 GMT
	I0926 18:16:03.716654    5496 round_trippers.go:580]     Audit-Id: 2d77e054-e393-47e1-b6c0-a85a653e5fa8
	I0926 18:16:03.716658    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:03.716660    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:03.716662    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:03.716666    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:03.716913    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:04.213711    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:04.213738    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:04.213749    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:04.213756    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:04.216728    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:04.216743    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:04.216750    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:04 GMT
	I0926 18:16:04.216754    5496 round_trippers.go:580]     Audit-Id: 3856faf2-d665-4ba5-814f-a001bd910e14
	I0926 18:16:04.216758    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:04.216763    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:04.216766    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:04.216769    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:04.216851    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:04.217224    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:04.217233    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:04.217240    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:04.217248    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:04.218633    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:04.218643    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:04.218648    5496 round_trippers.go:580]     Audit-Id: 3f1bb779-cd1d-4b2b-bf1b-44cef6ce1444
	I0926 18:16:04.218650    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:04.218653    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:04.218656    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:04.218659    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:04.218661    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:04 GMT
	I0926 18:16:04.218780    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:04.712790    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:04.712847    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:04.712874    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:04.712882    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:04.715348    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:04.715376    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:04.715392    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:04 GMT
	I0926 18:16:04.715403    5496 round_trippers.go:580]     Audit-Id: 381870e6-411f-4bc7-a5b2-06f3ab0df741
	I0926 18:16:04.715415    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:04.715422    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:04.715427    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:04.715432    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:04.715603    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:04.715927    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:04.715933    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:04.715938    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:04.715945    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:04.717434    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:04.717443    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:04.717447    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:04.717450    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:04.717453    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:04.717455    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:04.717458    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:04 GMT
	I0926 18:16:04.717463    5496 round_trippers.go:580]     Audit-Id: 13275a57-33fb-4475-8904-a0cf82b08de6
	I0926 18:16:04.717520    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:04.717697    5496 pod_ready.go:103] pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace has status "Ready":"False"
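The pod_ready.go:103 checkpoint above closes one iteration of minikube's readiness poll: roughly every 500ms it GETs the CoreDNS pod (and its node) and re-checks the pod's Ready condition, within the "waiting up to 6m0s" budget logged for this phase. Below is a minimal client-go sketch of such a loop; it is not minikube's actual pod_ready.go, the kubeconfig path is an assumption, and the namespace and pod name are simply taken from the log:

	// waitready.go: a minimal sketch (not minikube's implementation) of the
	// readiness poll producing the GET traces above. It re-fetches the pod
	// every 500ms and succeeds once its Ready condition reports "True".
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls the API server until the named pod's Ready condition
	// is True, mirroring the ~500ms cadence and 6m budget seen in the log.
	func waitPodReady(ctx context.Context, client kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						fmt.Printf("pod %q has status \"Ready\":%q\n", name, c.Status)
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil // Ready condition not posted yet; keep polling
			})
	}

	func main() {
		// Assumes a standard kubeconfig at the default location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(context.Background(), client, "kube-system", "coredns-7c65d6cfc9-hxdhm"); err != nil {
			panic(err)
		}
	}
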
	I0926 18:16:05.212553    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:05.212573    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:05.212585    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:05.212592    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:05.214968    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:05.214980    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:05.214986    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:05.214989    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:05.214992    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:05.214995    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:05.214997    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:05 GMT
	I0926 18:16:05.214999    5496 round_trippers.go:580]     Audit-Id: 471fd1c1-7c8b-481e-b842-7bd63ef96a20
	I0926 18:16:05.215092    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:05.215402    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:05.215409    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:05.215415    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:05.215419    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:05.216818    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:05.216826    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:05.216833    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:05.216864    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:05.216871    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:05 GMT
	I0926 18:16:05.216882    5496 round_trippers.go:580]     Audit-Id: 69f287ee-6cd6-4907-9920-6771d90d68cf
	I0926 18:16:05.216886    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:05.216890    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:05.217004    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:05.713284    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:05.713313    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:05.713325    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:05.713330    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:05.716123    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:05.716138    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:05.716145    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:05.716150    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:05.716153    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:05 GMT
	I0926 18:16:05.716156    5496 round_trippers.go:580]     Audit-Id: c236a70d-dca2-43ed-88fc-5879c4da6276
	I0926 18:16:05.716159    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:05.716162    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:05.716608    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:05.716988    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:05.716998    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:05.717005    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:05.717009    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:05.718413    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:05.718421    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:05.718427    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:05.718432    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:05 GMT
	I0926 18:16:05.718436    5496 round_trippers.go:580]     Audit-Id: e2bebbe0-f138-4103-9234-96d5b4493142
	I0926 18:16:05.718440    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:05.718445    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:05.718448    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:05.718652    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:06.212124    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:06.212144    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:06.212152    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:06.212160    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:06.214150    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:06.214163    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:06.214171    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:06.214187    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:06.214190    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:06 GMT
	I0926 18:16:06.214193    5496 round_trippers.go:580]     Audit-Id: 4c81e4b1-7637-4522-88a5-35994488ee60
	I0926 18:16:06.214195    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:06.214198    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:06.214456    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:06.214738    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:06.214745    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:06.214751    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:06.214755    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:06.215721    5496 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0926 18:16:06.215729    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:06.215734    5496 round_trippers.go:580]     Audit-Id: 6793a338-0f0b-40ac-808c-7c730fdaa921
	I0926 18:16:06.215737    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:06.215740    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:06.215742    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:06.215745    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:06.215752    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:06 GMT
	I0926 18:16:06.215916    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:06.712574    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:06.712599    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:06.712611    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:06.712618    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:06.715372    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:06.715388    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:06.715394    5496 round_trippers.go:580]     Audit-Id: 964daf26-8394-4eab-82c8-79eaacdbb111
	I0926 18:16:06.715397    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:06.715402    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:06.715406    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:06.715410    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:06.715419    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:06 GMT
	I0926 18:16:06.715499    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:06.715868    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:06.715878    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:06.715885    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:06.715892    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:06.717615    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:06.717622    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:06.717628    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:06.717630    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:06.717633    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:06.717636    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:06 GMT
	I0926 18:16:06.717639    5496 round_trippers.go:580]     Audit-Id: ccb2dc73-d30a-4c73-87fa-54f1870594a0
	I0926 18:16:06.717642    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:06.717718    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:06.717892    5496 pod_ready.go:103] pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace has status "Ready":"False"
	I0926 18:16:07.212618    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:07.212639    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:07.212651    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:07.212657    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:07.215738    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:16:07.215752    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:07.215760    5496 round_trippers.go:580]     Audit-Id: 8caa92ad-c5fe-4849-92b3-734aa6eb01e5
	I0926 18:16:07.215764    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:07.215767    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:07.215771    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:07.215791    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:07.215794    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:07 GMT
	I0926 18:16:07.216121    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:07.216498    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:07.216509    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:07.216516    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:07.216527    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:07.217819    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:07.217827    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:07.217832    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:07.217835    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:07.217849    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:07.217857    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:07.217861    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:07 GMT
	I0926 18:16:07.217865    5496 round_trippers.go:580]     Audit-Id: d1de1a1f-5441-4b91-9eb9-edf61f425c09
	I0926 18:16:07.217973    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:07.712067    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:07.712083    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:07.712090    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:07.712094    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:07.714031    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:07.714042    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:07.714050    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:07.714056    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:07.714062    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:07 GMT
	I0926 18:16:07.714065    5496 round_trippers.go:580]     Audit-Id: 74fd7ec4-593c-47d4-add6-3b619a16e4ea
	I0926 18:16:07.714069    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:07.714073    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:07.714333    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:07.714638    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:07.714645    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:07.714650    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:07.714654    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:07.716665    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:07.716675    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:07.716681    5496 round_trippers.go:580]     Audit-Id: c0be86c7-e109-45fa-acbb-fc3af4ede8f7
	I0926 18:16:07.716685    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:07.716690    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:07.716693    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:07.716696    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:07.716699    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:07 GMT
	I0926 18:16:07.716889    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:08.212879    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:08.212895    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.212903    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.212915    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.215000    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:08.215010    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.215015    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.215019    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.215023    5496 round_trippers.go:580]     Audit-Id: 45b134b7-7b5e-4989-947a-6d2e367bc761
	I0926 18:16:08.215027    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.215029    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.215032    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.215271    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:08.215565    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:08.215572    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.215578    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.215581    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.216758    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:08.216765    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.216769    5496 round_trippers.go:580]     Audit-Id: 926a702c-fc90-4baf-93f6-eff541526f7c
	I0926 18:16:08.216772    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.216775    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.216777    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.216779    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.216785    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.216902    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:08.712112    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:08.712134    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.712143    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.712147    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.714246    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:08.714258    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.714266    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.714270    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.714272    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.714274    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.714305    5496 round_trippers.go:580]     Audit-Id: e431404e-082c-4bff-af26-2800dd810ef0
	I0926 18:16:08.714312    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.714686    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1374","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7040 chars]
	I0926 18:16:08.715136    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:08.715143    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.715163    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.715166    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.716625    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:08.716633    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.716637    5496 round_trippers.go:580]     Audit-Id: 82b0e256-e8f3-4136-b015-289d803762d8
	I0926 18:16:08.716640    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.716643    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.716646    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.716649    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.716652    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.716724    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:08.716964    5496 pod_ready.go:93] pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace has status "Ready":"True"
	I0926 18:16:08.716986    5496 pod_ready.go:82] duration metric: took 12.505026877s for pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:08.716993    5496 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-108000" in "kube-system" namespace to be "Ready" ...
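Note that the final poll at 18:16:08.714 returned the pod at resourceVersion 1374, where every earlier poll saw 1208: the Ready flip arrives as an ordinary update to the pod object, and the 12.505026877s duration metric is simply the wall-clock time the loop spent re-fetching until that update appeared.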
	I0926 18:16:08.717042    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-108000
	I0926 18:16:08.717048    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.717053    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.717057    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.718219    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:08.718226    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.718231    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.718234    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.718251    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.718259    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.718263    5496 round_trippers.go:580]     Audit-Id: 6d5f2ed6-1617-40cb-bf5c-402f0d5297ac
	I0926 18:16:08.718271    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.718394    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-108000","namespace":"kube-system","uid":"2a5e99f4-416d-4d75-acd2-33231f5f780d","resourceVersion":"1339","creationTimestamp":"2024-09-27T01:08:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"416a07783603924d51ef5ca1abc7c318","kubernetes.io/config.mirror":"416a07783603924d51ef5ca1abc7c318","kubernetes.io/config.seen":"2024-09-27T01:08:53.027445649Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6664 chars]
	I0926 18:16:08.718621    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:08.718632    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.718639    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.718641    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.719690    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:08.719696    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.719701    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.719705    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.719708    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.719711    5496 round_trippers.go:580]     Audit-Id: 43ab5724-617c-4514-a5bb-167d63692c64
	I0926 18:16:08.719713    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.719716    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.719869    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:08.720037    5496 pod_ready.go:93] pod "etcd-multinode-108000" in "kube-system" namespace has status "Ready":"True"
	I0926 18:16:08.720045    5496 pod_ready.go:82] duration metric: took 3.033233ms for pod "etcd-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:08.720056    5496 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-108000" in "kube-system" namespace to be "Ready" ...
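The same wait now repeats sequentially for each control-plane pod; etcd-multinode-108000 finished in about 3ms because its Ready condition was already True on the first GET. In terms of the hypothetical waitPodReady sketch above, the remainder of this phase is roughly:

	// Continuing the earlier sketch (assumed helper waitPodReady): check the
	// control-plane pods named in the log in turn. A pod that is already
	// Ready satisfies the condition on its first poll.
	for _, name := range []string{"etcd-multinode-108000", "kube-apiserver-multinode-108000"} {
		if err := waitPodReady(context.Background(), client, "kube-system", name); err != nil {
			panic(err)
		}
	}
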
	I0926 18:16:08.720092    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-108000
	I0926 18:16:08.720097    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.720102    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.720106    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.721268    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:08.721274    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.721279    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.721281    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.721285    5496 round_trippers.go:580]     Audit-Id: 3b3dc1f7-a39b-4e8b-a18f-b508b0ba1b76
	I0926 18:16:08.721288    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.721290    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.721292    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.721466    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-108000","namespace":"kube-system","uid":"b8011715-128c-4dfc-94b7-cc9c04907c8a","resourceVersion":"1324","creationTimestamp":"2024-09-27T01:08:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.14:8443","kubernetes.io/config.hash":"3b2e0fdf454135a81bc6cacb88271d66","kubernetes.io/config.mirror":"3b2e0fdf454135a81bc6cacb88271d66","kubernetes.io/config.seen":"2024-09-27T01:08:53.027447712Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7892 chars]
	I0926 18:16:08.721703    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:08.721709    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.721715    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.721718    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.722770    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:08.722783    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.722788    5496 round_trippers.go:580]     Audit-Id: 2ddd8768-a8e3-4ead-9322-d4bd19be6dac
	I0926 18:16:08.722792    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.722794    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.722797    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.722801    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.722804    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.723100    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:08.723275    5496 pod_ready.go:93] pod "kube-apiserver-multinode-108000" in "kube-system" namespace has status "Ready":"True"
	I0926 18:16:08.723282    5496 pod_ready.go:82] duration metric: took 3.221367ms for pod "kube-apiserver-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:08.723288    5496 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:08.723319    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-108000
	I0926 18:16:08.723324    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.723329    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.723332    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.724429    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:08.724436    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.724441    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.724456    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.724462    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.724470    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.724473    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.724475    5496 round_trippers.go:580]     Audit-Id: 935f71ba-741e-4b6c-baa8-8880af499c49
	I0926 18:16:08.724752    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-108000","namespace":"kube-system","uid":"42fac17d-5eda-41e8-8747-902b605e747f","resourceVersion":"1343","creationTimestamp":"2024-09-27T01:08:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fec5fbfbd6a0fb8784a74d22da6a6ca2","kubernetes.io/config.mirror":"fec5fbfbd6a0fb8784a74d22da6a6ca2","kubernetes.io/config.seen":"2024-09-27T01:08:53.027448437Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0926 18:16:08.724969    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:08.724975    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.724980    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.724985    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.726136    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:08.726143    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.726149    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.726155    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.726160    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.726171    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.726173    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.726176    5496 round_trippers.go:580]     Audit-Id: c1b31b81-6e52-41b9-82f9-a973a2ea460a
	I0926 18:16:08.726293    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:08.726466    5496 pod_ready.go:93] pod "kube-controller-manager-multinode-108000" in "kube-system" namespace has status "Ready":"True"
	I0926 18:16:08.726474    5496 pod_ready.go:82] duration metric: took 3.181952ms for pod "kube-controller-manager-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:08.726481    5496 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9kjdl" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:08.726520    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9kjdl
	I0926 18:16:08.726525    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.726530    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.726534    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.727483    5496 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0926 18:16:08.727490    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.727495    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.727508    5496 round_trippers.go:580]     Audit-Id: 8925fcb2-83f8-4fc0-b6c2-47cfb2296bdd
	I0926 18:16:08.727514    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.727517    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.727520    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.727522    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.727634    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9kjdl","generateName":"kube-proxy-","namespace":"kube-system","uid":"979606a2-6bc4-46c0-8333-000bc25722f3","resourceVersion":"1316","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec618504-e724-4298-bf49-56d9dcae8b25","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec618504-e724-4298-bf49-56d9dcae8b25\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6395 chars]
	I0926 18:16:08.727865    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:08.727872    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.727877    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.727880    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.728757    5496 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0926 18:16:08.728765    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.728772    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.728777    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.728791    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.728796    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.728799    5496 round_trippers.go:580]     Audit-Id: 18a998c6-a62e-431e-a427-1d957ad8d6a5
	I0926 18:16:08.728801    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.728892    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:08.729055    5496 pod_ready.go:93] pod "kube-proxy-9kjdl" in "kube-system" namespace has status "Ready":"True"
	I0926 18:16:08.729063    5496 pod_ready.go:82] duration metric: took 2.576896ms for pod "kube-proxy-9kjdl" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:08.729068    5496 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ngs2x" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:08.913415    5496 request.go:632] Waited for 184.23754ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ngs2x
	I0926 18:16:08.913463    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ngs2x
	I0926 18:16:08.913471    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.913486    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.913496    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.916020    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:08.916036    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.916043    5496 round_trippers.go:580]     Audit-Id: 2a06992a-a8b1-4882-a62c-ae06d1490485
	I0926 18:16:08.916048    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.916051    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.916054    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.916076    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.916083    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:09 GMT
	I0926 18:16:08.916253    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ngs2x","generateName":"kube-proxy-","namespace":"kube-system","uid":"f95c0316-b4a8-4f0c-a90b-a88af50fbc68","resourceVersion":"1040","creationTimestamp":"2024-09-27T01:09:40Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec618504-e724-4298-bf49-56d9dcae8b25","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:09:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec618504-e724-4298-bf49-56d9dcae8b25\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0926 18:16:09.114275    5496 request.go:632] Waited for 197.563469ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-108000-m02
	I0926 18:16:09.114347    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000-m02
	I0926 18:16:09.114356    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:09.114369    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:09.114376    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:09.116858    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:09.116875    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:09.116884    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:09.116905    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:09.116911    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:09 GMT
	I0926 18:16:09.116915    5496 round_trippers.go:580]     Audit-Id: 61c040fa-5156-4d6f-ae55-9bf815c5c22a
	I0926 18:16:09.116918    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:09.116921    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:09.117118    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000-m02","uid":"653db940-78e0-431e-befd-25309d2a6cc8","resourceVersion":"1071","creationTimestamp":"2024-09-27T01:13:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_26T18_13_29_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:13:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3812 chars]
	I0926 18:16:09.117352    5496 pod_ready.go:93] pod "kube-proxy-ngs2x" in "kube-system" namespace has status "Ready":"True"
	I0926 18:16:09.117363    5496 pod_ready.go:82] duration metric: took 388.284535ms for pod "kube-proxy-ngs2x" in "kube-system" namespace to be "Ready" ...
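
The "Waited for ... due to client-side throttling" messages above come from client-go's own rate limiter (default 5 QPS, burst 10), not from server-side priority and fairness; the burst of readiness GETs simply exceeds it. A sketch of relaxing the limiter on the rest.Config, with illustrative values:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset whose limiter allows more than the
// default 5 QPS / 10 burst, which silences the throttling waits above.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // sustained requests per second (illustrative)
	cfg.Burst = 100 // momentary burst above QPS (illustrative)
	return kubernetes.NewForConfig(cfg)
}
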
	I0926 18:16:09.117372    5496 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pwrqj" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:09.313915    5496 request.go:632] Waited for 196.494719ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pwrqj
	I0926 18:16:09.314006    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pwrqj
	I0926 18:16:09.314017    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:09.314031    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:09.314039    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:09.317214    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:16:09.317231    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:09.317238    5496 round_trippers.go:580]     Audit-Id: 1320ad3b-29df-4788-b7ef-2e12a77ead86
	I0926 18:16:09.317243    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:09.317246    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:09.317249    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:09.317253    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:09.317257    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:09 GMT
	I0926 18:16:09.317416    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pwrqj","generateName":"kube-proxy-","namespace":"kube-system","uid":"dfc98f0e-705d-41fd-a871-9d4f8455b11d","resourceVersion":"1158","creationTimestamp":"2024-09-27T01:10:31Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec618504-e724-4298-bf49-56d9dcae8b25","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:10:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec618504-e724-4298-bf49-56d9dcae8b25\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0926 18:16:09.513510    5496 request.go:632] Waited for 195.70342ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-108000-m03
	I0926 18:16:09.513623    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000-m03
	I0926 18:16:09.513636    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:09.513648    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:09.513655    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:09.516445    5496 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0926 18:16:09.516461    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:09.516469    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:09.516473    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:09.516486    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:09.516490    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:09.516495    5496 round_trippers.go:580]     Content-Length: 210
	I0926 18:16:09.516499    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:09 GMT
	I0926 18:16:09.516503    5496 round_trippers.go:580]     Audit-Id: b84c6d4d-7ff7-40c3-a888-51c82f59b474
	I0926 18:16:09.516522    5496 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-108000-m03\" not found","reason":"NotFound","details":{"name":"multinode-108000-m03","kind":"nodes"},"code":404}
	I0926 18:16:09.516586    5496 pod_ready.go:98] node "multinode-108000-m03" hosting pod "kube-proxy-pwrqj" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-108000-m03": nodes "multinode-108000-m03" not found
	I0926 18:16:09.516599    5496 pod_ready.go:82] duration metric: took 399.215483ms for pod "kube-proxy-pwrqj" in "kube-system" namespace to be "Ready" ...
	E0926 18:16:09.516607    5496 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-108000-m03" hosting pod "kube-proxy-pwrqj" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-108000-m03": nodes "multinode-108000-m03" not found
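
The 404 for multinode-108000-m03 arrives as a typed Status error, which is what lets the wait loop skip kube-proxy-pwrqj instead of failing outright. A sketch of that distinction with client-go (the helper name is illustrative):

package main

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeGone separates a genuine `nodes "..." not found` 404 from transport
// or auth failures, mirroring the skip decision logged above.
func nodeGone(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	_, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil
	}
	return false, err
}
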
	I0926 18:16:09.516614    5496 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:09.713207    5496 request.go:632] Waited for 196.494898ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-108000
	I0926 18:16:09.713288    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-108000
	I0926 18:16:09.713301    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:09.713317    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:09.713326    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:09.716159    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:09.716177    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:09.716184    5496 round_trippers.go:580]     Audit-Id: 3e9f3964-2f01-4f4c-866a-5e4f7f2fe5d2
	I0926 18:16:09.716190    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:09.716193    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:09.716197    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:09.716220    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:09.716228    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:09 GMT
	I0926 18:16:09.716357    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-108000","namespace":"kube-system","uid":"e5b482e0-154d-4620-8f24-1ebf181b9c1b","resourceVersion":"1335","creationTimestamp":"2024-09-27T01:08:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"40cf241c42ae492b94bc92cec52f27f4","kubernetes.io/config.mirror":"40cf241c42ae492b94bc92cec52f27f4","kubernetes.io/config.seen":"2024-09-27T01:08:53.027449029Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0926 18:16:09.913068    5496 request.go:632] Waited for 196.314136ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:09.913119    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:09.913128    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:09.913170    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:09.913181    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:09.915980    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:09.915994    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:09.916002    5496 round_trippers.go:580]     Audit-Id: 9a9c499c-1a25-438c-9be3-9aa603be7aa5
	I0926 18:16:09.916006    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:09.916035    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:09.916048    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:09.916051    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:09.916056    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:10 GMT
	I0926 18:16:09.916155    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:09.916423    5496 pod_ready.go:93] pod "kube-scheduler-multinode-108000" in "kube-system" namespace has status "Ready":"True"
	I0926 18:16:09.916434    5496 pod_ready.go:82] duration metric: took 399.807036ms for pod "kube-scheduler-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:09.916443    5496 pod_ready.go:39] duration metric: took 13.709820236s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0926 18:16:09.916458    5496 api_server.go:52] waiting for apiserver process to appear ...
	I0926 18:16:09.916533    5496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:16:09.929940    5496 command_runner.go:130] > 1714
	I0926 18:16:09.929978    5496 api_server.go:72] duration metric: took 30.506092435s to wait for apiserver process to appear ...
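
The process check above is plain pgrep: with -f the pattern is matched against the full command line, -x requires the whole line to match, and -n picks the newest pid, so the single output line "1714" (or a non-zero exit) is the entire contract. A local stand-in for the SSH-run command, sudo omitted:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPid returns the newest pid whose full command line matches the
// pattern; pgrep exits non-zero when no process matches.
func apiserverPid() (string, error) {
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("apiserver process not found: %w", err)
	}
	return strings.TrimSpace(string(out)), nil // e.g. "1714"
}

func main() {
	fmt.Println(apiserverPid())
}
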
	I0926 18:16:09.929985    5496 api_server.go:88] waiting for apiserver healthz status ...
	I0926 18:16:09.929997    5496 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0926 18:16:09.933731    5496 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
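
The healthz probe is an HTTPS GET expecting status 200 and the literal body "ok". A minimal sketch; skipping certificate verification is assumed tolerable here only because the target is a throwaway test VM:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// healthz probes the apiserver health endpoint the way the log above does.
func healthz(addr string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://" + addr + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil // body is "ok" on a healthy control plane
}

func main() {
	fmt.Println(healthz("192.169.0.14:8443"))
}
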
	I0926 18:16:09.933761    5496 round_trippers.go:463] GET https://192.169.0.14:8443/version
	I0926 18:16:09.933767    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:09.933772    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:09.933777    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:09.934445    5496 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0926 18:16:09.934453    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:09.934458    5496 round_trippers.go:580]     Content-Length: 263
	I0926 18:16:09.934461    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:10 GMT
	I0926 18:16:09.934465    5496 round_trippers.go:580]     Audit-Id: 5c340580-5c39-47b5-a356-133075a6df60
	I0926 18:16:09.934468    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:09.934470    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:09.934474    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:09.934476    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:09.934520    5496 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0926 18:16:09.934547    5496 api_server.go:141] control plane version: v1.31.1
	I0926 18:16:09.934558    5496 api_server.go:131] duration metric: took 4.566184ms to wait for apiserver health ...
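
The /version body printed above is plain JSON, so extracting the control-plane version is a one-struct decode. A sketch over the exact payload from the log, keeping only the fields the wait loop reads:

package main

import (
	"encoding/json"
	"fmt"
)

type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	raw := []byte(`{"major":"1","minor":"31","gitVersion":"v1.31.1","platform":"linux/amd64"}`)
	var v versionInfo
	if err := json.Unmarshal(raw, &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // v1.31.1
}
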
	I0926 18:16:09.934564    5496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 18:16:10.113446    5496 request.go:632] Waited for 178.818116ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0926 18:16:10.113564    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0926 18:16:10.113575    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:10.113586    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:10.113597    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:10.117452    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:16:10.117472    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:10.117483    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:10.117501    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:10.117508    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:10.117516    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:10 GMT
	I0926 18:16:10.117523    5496 round_trippers.go:580]     Audit-Id: a1510877-fe88-4671-851a-7550c754986d
	I0926 18:16:10.117529    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:10.118591    5496 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1378"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1374","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89336 chars]
	I0926 18:16:10.120628    5496 system_pods.go:59] 12 kube-system pods found
	I0926 18:16:10.120638    5496 system_pods.go:61] "coredns-7c65d6cfc9-hxdhm" [ff9bbfa0-9278-44d7-abc5-7a38ed77ce23] Running
	I0926 18:16:10.120642    5496 system_pods.go:61] "etcd-multinode-108000" [2a5e99f4-416d-4d75-acd2-33231f5f780d] Running
	I0926 18:16:10.120645    5496 system_pods.go:61] "kindnet-ktwmw" [5065643a-e9ee-44a6-a05d-b9154074dd84] Running
	I0926 18:16:10.120651    5496 system_pods.go:61] "kindnet-qlv2x" [08c7f9d2-c689-40b5-95fc-a48157150778] Running
	I0926 18:16:10.120655    5496 system_pods.go:61] "kindnet-wbk29" [a9ff7c3f-b5e1-40e5-ab9d-a38e2696988f] Running
	I0926 18:16:10.120658    5496 system_pods.go:61] "kube-apiserver-multinode-108000" [b8011715-128c-4dfc-94b7-cc9c04907c8a] Running
	I0926 18:16:10.120662    5496 system_pods.go:61] "kube-controller-manager-multinode-108000" [42fac17d-5eda-41e8-8747-902b605e747f] Running
	I0926 18:16:10.120664    5496 system_pods.go:61] "kube-proxy-9kjdl" [979606a2-6bc4-46c0-8333-000bc25722f3] Running
	I0926 18:16:10.120667    5496 system_pods.go:61] "kube-proxy-ngs2x" [f95c0316-b4a8-4f0c-a90b-a88af50fbc68] Running
	I0926 18:16:10.120669    5496 system_pods.go:61] "kube-proxy-pwrqj" [dfc98f0e-705d-41fd-a871-9d4f8455b11d] Running
	I0926 18:16:10.120672    5496 system_pods.go:61] "kube-scheduler-multinode-108000" [e5b482e0-154d-4620-8f24-1ebf181b9c1b] Running
	I0926 18:16:10.120676    5496 system_pods.go:61] "storage-provisioner" [e67377e5-f7c5-4625-9739-3703de1f4739] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 18:16:10.120681    5496 system_pods.go:74] duration metric: took 186.111068ms to wait for pod list to return data ...
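
The "12 kube-system pods found" inventory above is one List call over the namespace; printing name and phase per item reproduces those lines. A sketch (the helper name is illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listSystemPods prints each kube-system pod with its UID and phase,
// the same shape as the system_pods.go lines above.
func listSystemPods(ctx context.Context, cs *kubernetes.Clientset) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
	return nil
}
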
	I0926 18:16:10.120686    5496 default_sa.go:34] waiting for default service account to be created ...
	I0926 18:16:10.312196    5496 request.go:632] Waited for 191.450108ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/default/serviceaccounts
	I0926 18:16:10.312274    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/default/serviceaccounts
	I0926 18:16:10.312282    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:10.312289    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:10.312293    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:10.314931    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:10.314940    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:10.314945    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:10.314950    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:10.314952    5496 round_trippers.go:580]     Content-Length: 262
	I0926 18:16:10.314955    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:10 GMT
	I0926 18:16:10.314958    5496 round_trippers.go:580]     Audit-Id: 0579afb4-d182-49f3-824c-63d92338701e
	I0926 18:16:10.314970    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:10.314973    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:10.314983    5496 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1378"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2124ff28-6fda-431f-9782-123cd032ca69","resourceVersion":"363","creationTimestamp":"2024-09-27T01:08:58Z"}}]}
	I0926 18:16:10.315096    5496 default_sa.go:45] found service account: "default"
	I0926 18:16:10.315105    5496 default_sa.go:55] duration metric: took 194.411667ms for default service account to be created ...
	I0926 18:16:10.315129    5496 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 18:16:10.512752    5496 request.go:632] Waited for 197.563272ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0926 18:16:10.512887    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0926 18:16:10.512898    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:10.512909    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:10.512915    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:10.516171    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:16:10.516188    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:10.516196    5496 round_trippers.go:580]     Audit-Id: ea9e0f75-fb7e-41d9-98f6-04bd29f02b8d
	I0926 18:16:10.516203    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:10.516208    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:10.516213    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:10.516218    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:10.516223    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:10 GMT
	I0926 18:16:10.517179    5496 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1378"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1374","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89336 chars]
	I0926 18:16:10.519162    5496 system_pods.go:86] 12 kube-system pods found
	I0926 18:16:10.519173    5496 system_pods.go:89] "coredns-7c65d6cfc9-hxdhm" [ff9bbfa0-9278-44d7-abc5-7a38ed77ce23] Running
	I0926 18:16:10.519177    5496 system_pods.go:89] "etcd-multinode-108000" [2a5e99f4-416d-4d75-acd2-33231f5f780d] Running
	I0926 18:16:10.519185    5496 system_pods.go:89] "kindnet-ktwmw" [5065643a-e9ee-44a6-a05d-b9154074dd84] Running
	I0926 18:16:10.519189    5496 system_pods.go:89] "kindnet-qlv2x" [08c7f9d2-c689-40b5-95fc-a48157150778] Running
	I0926 18:16:10.519192    5496 system_pods.go:89] "kindnet-wbk29" [a9ff7c3f-b5e1-40e5-ab9d-a38e2696988f] Running
	I0926 18:16:10.519195    5496 system_pods.go:89] "kube-apiserver-multinode-108000" [b8011715-128c-4dfc-94b7-cc9c04907c8a] Running
	I0926 18:16:10.519198    5496 system_pods.go:89] "kube-controller-manager-multinode-108000" [42fac17d-5eda-41e8-8747-902b605e747f] Running
	I0926 18:16:10.519201    5496 system_pods.go:89] "kube-proxy-9kjdl" [979606a2-6bc4-46c0-8333-000bc25722f3] Running
	I0926 18:16:10.519204    5496 system_pods.go:89] "kube-proxy-ngs2x" [f95c0316-b4a8-4f0c-a90b-a88af50fbc68] Running
	I0926 18:16:10.519208    5496 system_pods.go:89] "kube-proxy-pwrqj" [dfc98f0e-705d-41fd-a871-9d4f8455b11d] Running
	I0926 18:16:10.519212    5496 system_pods.go:89] "kube-scheduler-multinode-108000" [e5b482e0-154d-4620-8f24-1ebf181b9c1b] Running
	I0926 18:16:10.519216    5496 system_pods.go:89] "storage-provisioner" [e67377e5-f7c5-4625-9739-3703de1f4739] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 18:16:10.519222    5496 system_pods.go:126] duration metric: took 204.085161ms to wait for k8s-apps to be running ...
	I0926 18:16:10.519230    5496 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 18:16:10.519290    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 18:16:10.531238    5496 system_svc.go:56] duration metric: took 12.005812ms WaitForService to wait for kubelet
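
The kubelet check above leans entirely on systemctl's exit status: "is-active --quiet" prints nothing and exits 0 only for an active unit. A local stand-in for the SSH-run command (the exact remote invocation is shown in the log):

package main

import (
	"fmt"
	"os/exec"
)

// kubeletRunning reports whether the kubelet unit is active; with --quiet
// the exit code alone carries the answer.
func kubeletRunning() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletRunning())
}
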
	I0926 18:16:10.531252    5496 kubeadm.go:582] duration metric: took 31.107358282s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:16:10.531263    5496 node_conditions.go:102] verifying NodePressure condition ...
	I0926 18:16:10.712625    5496 request.go:632] Waited for 181.265839ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes
	I0926 18:16:10.712727    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes
	I0926 18:16:10.712739    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:10.712750    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:10.712759    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:10.716020    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:16:10.716036    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:10.716043    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:10 GMT
	I0926 18:16:10.716046    5496 round_trippers.go:580]     Audit-Id: ede53581-e306-437d-9089-b442b44b2546
	I0926 18:16:10.716050    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:10.716053    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:10.716056    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:10.716060    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:10.716193    5496 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1378"},"items":[{"metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10031 chars]
	I0926 18:16:10.716590    5496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 18:16:10.716601    5496 node_conditions.go:123] node cpu capacity is 2
	I0926 18:16:10.716610    5496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 18:16:10.716624    5496 node_conditions.go:123] node cpu capacity is 2
	I0926 18:16:10.716630    5496 node_conditions.go:105] duration metric: took 185.361562ms to run NodePressure ...
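
The NodePressure figures above (17734596Ki of ephemeral storage and 2 CPUs per node) are read straight from each node's Status.Capacity. A sketch of the same read (the helper name is illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists every node and reports the two capacity
// quantities the node_conditions.go lines above verify.
func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
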
	I0926 18:16:10.716640    5496 start.go:241] waiting for startup goroutines ...
	I0926 18:16:10.716648    5496 start.go:246] waiting for cluster config update ...
	I0926 18:16:10.716656    5496 start.go:255] writing updated cluster config ...
	I0926 18:16:10.740326    5496 out.go:201] 
	I0926 18:16:10.762941    5496 config.go:182] Loaded profile config "multinode-108000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:16:10.763067    5496 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/config.json ...
	I0926 18:16:10.785642    5496 out.go:177] * Starting "multinode-108000-m02" worker node in "multinode-108000" cluster
	I0926 18:16:10.828316    5496 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:16:10.828348    5496 cache.go:56] Caching tarball of preloaded images
	I0926 18:16:10.828562    5496 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 18:16:10.828582    5496 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:16:10.828729    5496 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/config.json ...
	I0926 18:16:10.829651    5496 start.go:360] acquireMachinesLock for multinode-108000-m02: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:16:10.829736    5496 start.go:364] duration metric: took 66.242µs to acquireMachinesLock for "multinode-108000-m02"
	I0926 18:16:10.829755    5496 start.go:96] Skipping create...Using existing machine configuration
	I0926 18:16:10.829761    5496 fix.go:54] fixHost starting: m02
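
acquireMachinesLock above is configured with Delay 500ms and Timeout 13m; minikube takes a cross-process named mutex for this, but the retry shape can be sketched in-process. Everything here is illustrative, not the real lock implementation:

package main

import (
	"fmt"
	"time"
)

// acquireWithTimeout retries try() at a fixed delay until it succeeds or
// the deadline passes, the same Delay/Timeout shape as the log above.
func acquireWithTimeout(try func() bool, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for !try() {
		if time.Now().After(deadline) {
			return fmt.Errorf("lock not acquired within %s", timeout)
		}
		time.Sleep(delay)
	}
	return nil
}

func main() {
	held := make(chan struct{}, 1) // stand-in for a named machine lock
	try := func() bool {
		select {
		case held <- struct{}{}:
			return true
		default:
			return false
		}
	}
	fmt.Println(acquireWithTimeout(try, 500*time.Millisecond, 13*time.Minute))
}
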
	I0926 18:16:10.830111    5496 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:16:10.830139    5496 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:16:10.839542    5496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53395
	I0926 18:16:10.840007    5496 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:16:10.840355    5496 main.go:141] libmachine: Using API Version  1
	I0926 18:16:10.840367    5496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:16:10.840583    5496 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:16:10.840708    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .DriverName
	I0926 18:16:10.840795    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetState
	I0926 18:16:10.840893    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:16:10.840974    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | hyperkit pid from json: 5421
	I0926 18:16:10.841906    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | hyperkit pid 5421 missing from process table
	I0926 18:16:10.841940    5496 fix.go:112] recreateIfNeeded on multinode-108000-m02: state=Stopped err=<nil>
	I0926 18:16:10.841948    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .DriverName
	W0926 18:16:10.842035    5496 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 18:16:10.884340    5496 out.go:177] * Restarting existing hyperkit VM for "multinode-108000-m02" ...
	I0926 18:16:10.905397    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .Start
	I0926 18:16:10.905658    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:16:10.905745    5496 main.go:141] libmachine: (multinode-108000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/hyperkit.pid
	I0926 18:16:10.907373    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | hyperkit pid 5421 missing from process table
	I0926 18:16:10.907399    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | pid 5421 is in state "Stopped"
	I0926 18:16:10.907425    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/hyperkit.pid...
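
The driver concludes pid 5421 is stale by probing the process table, then deletes hyperkit.pid before restarting. On Unix, kill with signal 0 performs exactly that liveness probe; this sketch simplifies the driver's behavior, and the path in main is an assumption:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// reapStalePidFile removes the pid file when its process no longer exists.
// Signal 0 delivers nothing; the error alone reports whether the pid lives.
func reapStalePidFile(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return err
	}
	if err := syscall.Kill(pid, 0); err == syscall.ESRCH {
		fmt.Printf("pid %d missing from process table, removing %s\n", pid, path)
		return os.Remove(path)
	}
	return fmt.Errorf("pid %d still present", pid)
}

func main() {
	fmt.Println(reapStalePidFile("/path/to/hyperkit.pid")) // assumed path
}
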
	I0926 18:16:10.907804    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | Using UUID e259e2c5-bca0-4baf-a344-b5e82f91b394
	I0926 18:16:10.936208    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | Generated MAC ee:f:11:b8:c4:d4
	I0926 18:16:10.936235    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-108000
	I0926 18:16:10.936402    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e259e2c5-bca0-4baf-a344-b5e82f91b394", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aac00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:16:10.936438    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e259e2c5-bca0-4baf-a344-b5e82f91b394", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aac00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:16:10.936545    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e259e2c5-bca0-4baf-a344-b5e82f91b394", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/multinode-108000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-108000"}
	I0926 18:16:10.936616    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e259e2c5-bca0-4baf-a344-b5e82f91b394 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/multinode-108000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-108000"
	I0926 18:16:10.936644    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 18:16:10.938132    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 DEBUG: hyperkit: Pid is 5532
	I0926 18:16:10.938620    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | Attempt 0
	I0926 18:16:10.938633    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:16:10.938691    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | hyperkit pid from json: 5532
	I0926 18:16:10.940910    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | Searching for ee:f:11:b8:c4:d4 in /var/db/dhcpd_leases ...
	I0926 18:16:10.940974    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0926 18:16:10.941019    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:16:10.941052    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:16:10.941090    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f75815}
	I0926 18:16:10.941107    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetConfigRaw
	I0926 18:16:10.941107    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | Found match: ee:f:11:b8:c4:d4
	I0926 18:16:10.941164    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | IP: 192.169.0.15
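
Editor's note: the lease scan above is how the hyperkit driver maps the VM's freshly generated MAC address to an IP. It walks /var/db/dhcpd_leases (macOS bootpd's lease database) and matches on the hardware address. A minimal Go sketch of such a lookup, assuming bootpd's `ip_address=` / `hw_address=` record layout with `ip_address` preceding `hw_address` in each entry (field handling simplified for illustration, not minikube's actual code):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans the bootpd lease file for an entry whose hw_address
// matches mac and returns the ip_address recorded just above it.
func ipForMAC(leasesPath, mac string) (string, error) {
	f, err := os.Open(leasesPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address=1,ee:f:11:b8:c4:d4 -- strip the "1," type prefix.
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.Index(hw, ","); i >= 0 {
				hw = hw[i+1:]
			}
			if hw == mac {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "ee:f:11:b8:c4:d4")
	fmt.Println(ip, err)
}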
	I0926 18:16:10.941862    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetIP
	I0926 18:16:10.942059    5496 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/config.json ...
	I0926 18:16:10.942737    5496 machine.go:93] provisionDockerMachine start ...
	I0926 18:16:10.942749    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .DriverName
	I0926 18:16:10.942906    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:10.943008    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:10.943101    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:10.943238    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:10.943327    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:10.943460    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:16:10.943664    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0926 18:16:10.943672    5496 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 18:16:10.946770    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 18:16:10.955292    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 18:16:10.956603    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:16:10.956628    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:16:10.956642    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:16:10.956655    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:16:11.342405    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 18:16:11.342417    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 18:16:11.457102    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:16:11.457120    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:16:11.457194    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:16:11.457223    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:16:11.457964    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 18:16:11.457973    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 18:16:17.102016    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:17 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0926 18:16:17.102068    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:17 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0926 18:16:17.102076    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:17 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0926 18:16:17.126475    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:17 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0926 18:16:21.114075    5496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.15:22: connect: connection refused
	I0926 18:16:24.168910    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
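Editor's note: the "connection refused" at 18:16:21 followed by a successful `hostname` three seconds later is the usual boot race: sshd inside the freshly started VM is not up yet, so the driver keeps redialing port 22 until it answers. A sketch of such a dial-until-ready loop (the per-dial timeout, retry interval, and overall cap are assumptions for illustration):

package provision

import (
	"net"
	"time"
)

// waitForSSH redials addr (e.g. "192.169.0.15:22") until the TCP
// connection is accepted or the overall deadline passes.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close() // sshd is answering; provisioning can proceed
			return nil
		}
		if time.Now().After(deadline) {
			return err // give up with the last dial error
		}
		time.Sleep(time.Second) // connection refused: boot still in progress
	}
}
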
	I0926 18:16:24.168927    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetMachineName
	I0926 18:16:24.169054    5496 buildroot.go:166] provisioning hostname "multinode-108000-m02"
	I0926 18:16:24.169065    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetMachineName
	I0926 18:16:24.169159    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:24.169251    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:24.169357    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.169436    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.169520    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:24.169682    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:16:24.169825    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0926 18:16:24.169834    5496 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-108000-m02 && echo "multinode-108000-m02" | sudo tee /etc/hostname
	I0926 18:16:24.230948    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-108000-m02
	
	I0926 18:16:24.230975    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:24.231113    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:24.231215    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.231304    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.231397    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:24.231531    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:16:24.231674    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0926 18:16:24.231686    5496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-108000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-108000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-108000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 18:16:24.289165    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
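
Editor's note: the quoted shell above runs remotely to make the new hostname resolve locally: if no /etc/hosts line already ends with the hostname, the Debian-style 127.0.1.1 entry is rewritten in place, or appended when absent. The same logic in Go, as a simplified stand-in for the shell (alias-bearing hosts lines are not handled, unlike the original grep):

package provision

import (
	"os"
	"strings"
)

// ensureHostsEntry mirrors the /etc/hosts fix-up above: keep the file
// unchanged if name is already mapped, otherwise point 127.0.1.1 at it.
func ensureHostsEntry(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		fields := strings.Fields(l)
		if len(fields) == 2 && fields[1] == name {
			return nil // already resolvable, nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // replace the existing mapping
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	lines = append(lines, "127.0.1.1 "+name) // no 127.0.1.1 line: append one
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}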
	I0926 18:16:24.289186    5496 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 18:16:24.289196    5496 buildroot.go:174] setting up certificates
	I0926 18:16:24.289203    5496 provision.go:84] configureAuth start
	I0926 18:16:24.289211    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetMachineName
	I0926 18:16:24.289341    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetIP
	I0926 18:16:24.289455    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:24.289535    5496 provision.go:143] copyHostCerts
	I0926 18:16:24.289565    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 18:16:24.289626    5496 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 18:16:24.289631    5496 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 18:16:24.289779    5496 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 18:16:24.289981    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 18:16:24.290020    5496 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 18:16:24.290026    5496 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 18:16:24.290112    5496 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 18:16:24.290254    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 18:16:24.290293    5496 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 18:16:24.290299    5496 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 18:16:24.290380    5496 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 18:16:24.290524    5496 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.multinode-108000-m02 san=[127.0.0.1 192.169.0.15 localhost minikube multinode-108000-m02]
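
Editor's note: the "generating server cert" step above signs a Docker server certificate against the local CA, embedding every entry of the logged san=[...] list as either an IP or a DNS SAN. A compilable crypto/x509 sketch of that signing; the key size, validity window, serial scheme, and subject fields here are assumptions for illustration, not minikube's exact values:

package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a TLS server certificate signed by the CA,
// returning the DER bytes (callers would PEM-encode them to server.pem)
// and the new private key.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()), // sketch-grade serial
		Subject:      pkix.Name{CommonName: org},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Split the SAN list into IPs and DNS names, as in the san=[...] log line.
	for _, san := range sans {
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}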
	I0926 18:16:24.366522    5496 provision.go:177] copyRemoteCerts
	I0926 18:16:24.366572    5496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 18:16:24.366585    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:24.366716    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:24.366822    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.366914    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:24.367000    5496 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/id_rsa Username:docker}
	I0926 18:16:24.398912    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 18:16:24.398982    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 18:16:24.417796    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 18:16:24.417875    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 18:16:24.436574    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 18:16:24.436637    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0926 18:16:24.455630    5496 provision.go:87] duration metric: took 166.418573ms to configureAuth
	I0926 18:16:24.455642    5496 buildroot.go:189] setting minikube options for container-runtime
	I0926 18:16:24.455800    5496 config.go:182] Loaded profile config "multinode-108000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:16:24.455814    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .DriverName
	I0926 18:16:24.455958    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:24.456056    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:24.456142    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.456215    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.456298    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:24.456433    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:16:24.456556    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0926 18:16:24.456563    5496 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 18:16:24.508014    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 18:16:24.508027    5496 buildroot.go:70] root file system type: tmpfs
	I0926 18:16:24.508111    5496 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 18:16:24.508127    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:24.508265    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:24.508363    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.508438    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.508531    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:24.508667    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:16:24.508806    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0926 18:16:24.508850    5496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.14"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 18:16:24.570548    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.14
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 18:16:24.570570    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:24.570708    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:24.570797    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.570893    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.570983    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:24.571119    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:16:24.571257    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0926 18:16:24.571269    5496 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 18:16:26.151246    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 18:16:26.151261    5496 machine.go:96] duration metric: took 15.208338586s to provisionDockerMachine
	I0926 18:16:26.151275    5496 start.go:293] postStartSetup for "multinode-108000-m02" (driver="hyperkit")
	I0926 18:16:26.151282    5496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 18:16:26.151292    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .DriverName
	I0926 18:16:26.151502    5496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 18:16:26.151516    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:26.151624    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:26.151720    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:26.151803    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:26.151887    5496 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/id_rsa Username:docker}
	I0926 18:16:26.190229    5496 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 18:16:26.194730    5496 command_runner.go:130] > NAME=Buildroot
	I0926 18:16:26.194741    5496 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0926 18:16:26.194745    5496 command_runner.go:130] > ID=buildroot
	I0926 18:16:26.194748    5496 command_runner.go:130] > VERSION_ID=2023.02.9
	I0926 18:16:26.194770    5496 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0926 18:16:26.194949    5496 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 18:16:26.194961    5496 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 18:16:26.195080    5496 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 18:16:26.195261    5496 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 18:16:26.195267    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 18:16:26.195477    5496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 18:16:26.205182    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 18:16:26.236499    5496 start.go:296] duration metric: took 85.214566ms for postStartSetup
	I0926 18:16:26.236519    5496 fix.go:56] duration metric: took 15.406578543s for fixHost
	I0926 18:16:26.236535    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:26.236660    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:26.236739    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:26.236846    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:26.236930    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:26.237066    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:16:26.237215    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0926 18:16:26.237222    5496 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 18:16:26.289010    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727399786.357757601
	
	I0926 18:16:26.289021    5496 fix.go:216] guest clock: 1727399786.357757601
	I0926 18:16:26.289026    5496 fix.go:229] Guest: 2024-09-26 18:16:26.357757601 -0700 PDT Remote: 2024-09-26 18:16:26.236525 -0700 PDT m=+75.467120085 (delta=121.232601ms)
	I0926 18:16:26.289036    5496 fix.go:200] guest clock delta is within tolerance: 121.232601ms
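
Editor's note: the fix.go lines above run `date +%s.%N` in the guest, compare it with host time, and accept the 121ms delta as within tolerance rather than resyncing the guest clock. A sketch of that comparison; the 2s threshold is an assumption, since minikube's actual tolerance is not shown in this log:

package provision

import (
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDrift parses the guest's `date +%s.%N` output and returns
// guest-minus-host. Float parsing loses sub-microsecond precision,
// which is fine for a coarse drift check.
func clockDrift(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

// withinTolerance mirrors the "guest clock delta is within tolerance"
// decision above; the 2s threshold is illustrative.
func withinTolerance(d time.Duration) bool {
	return math.Abs(d.Seconds()) < 2
}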
	I0926 18:16:26.289040    5496 start.go:83] releasing machines lock for "multinode-108000-m02", held for 15.459115782s
	I0926 18:16:26.289057    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .DriverName
	I0926 18:16:26.289184    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetIP
	I0926 18:16:26.318797    5496 out.go:177] * Found network options:
	I0926 18:16:26.338485    5496 out.go:177]   - NO_PROXY=192.169.0.14
	W0926 18:16:26.375452    5496 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 18:16:26.375479    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .DriverName
	I0926 18:16:26.375952    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .DriverName
	I0926 18:16:26.376076    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .DriverName
	I0926 18:16:26.376176    5496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0926 18:16:26.376179    5496 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 18:16:26.376196    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:26.376244    5496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 18:16:26.376254    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:26.376311    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:26.376385    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:26.376424    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:26.376521    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:26.376537    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:26.376633    5496 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/id_rsa Username:docker}
	I0926 18:16:26.376658    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:26.376741    5496 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/id_rsa Username:docker}
	I0926 18:16:26.405394    5496 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0926 18:16:26.405444    5496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 18:16:26.405511    5496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 18:16:26.454318    5496 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0926 18:16:26.455166    5496 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0926 18:16:26.455197    5496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
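
Editor's note: the "disabled [...] bridge cni config(s)" step above amounts to renaming any *bridge*/*podman* files in /etc/cni/net.d to *.mk_disabled so the container runtime stops loading them. A sketch with filepath.Glob standing in for the logged `find ... -exec mv` invocation:

package provision

import (
	"os"
	"path/filepath"
)

// disableBridgeCNI renames bridge/podman CNI configs out of the way,
// skipping files that were already disabled, and reports what it moved.
func disableBridgeCNI(dir string) ([]string, error) {
	var disabled []string
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return disabled, err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled on a previous pass
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}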
	I0926 18:16:26.455211    5496 start.go:495] detecting cgroup driver to use...
	I0926 18:16:26.455332    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 18:16:26.470885    5496 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0926 18:16:26.471259    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 18:16:26.479939    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 18:16:26.488337    5496 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 18:16:26.488410    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 18:16:26.496681    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 18:16:26.505298    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 18:16:26.513668    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 18:16:26.522274    5496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 18:16:26.531204    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 18:16:26.539719    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 18:16:26.547937    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 18:16:26.556289    5496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 18:16:26.563677    5496 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 18:16:26.563695    5496 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 18:16:26.563744    5496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 18:16:26.573401    5496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
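
Editor's note: the three commands above form a probe-and-fallback: when /proc/sys/net/bridge/bridge-nf-call-iptables is missing ("which might be okay"), the br_netfilter module likely is not loaded yet, so it is modprobe'd before IPv4 forwarding is switched on. A Go rendition of the same logic (paths are the standard kernel interfaces; must run as root; error handling simplified):

package provision

import (
	"os"
	"os/exec"
)

// ensureBridgeNetfilter loads br_netfilter if the bridge-nf sysctl is
// absent, then enables IPv4 forwarding, matching the log above.
func ensureBridgeNetfilter() error {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); os.IsNotExist(err) {
		// Module not loaded yet; try loading it before giving up.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	// echo 1 > /proc/sys/net/ipv4/ip_forward
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}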
	I0926 18:16:26.585751    5496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:16:26.682695    5496 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 18:16:26.701736    5496 start.go:495] detecting cgroup driver to use...
	I0926 18:16:26.701814    5496 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 18:16:26.718260    5496 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0926 18:16:26.718770    5496 command_runner.go:130] > [Unit]
	I0926 18:16:26.718778    5496 command_runner.go:130] > Description=Docker Application Container Engine
	I0926 18:16:26.718796    5496 command_runner.go:130] > Documentation=https://docs.docker.com
	I0926 18:16:26.718802    5496 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0926 18:16:26.718809    5496 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0926 18:16:26.718818    5496 command_runner.go:130] > StartLimitBurst=3
	I0926 18:16:26.718822    5496 command_runner.go:130] > StartLimitIntervalSec=60
	I0926 18:16:26.718826    5496 command_runner.go:130] > [Service]
	I0926 18:16:26.718829    5496 command_runner.go:130] > Type=notify
	I0926 18:16:26.718833    5496 command_runner.go:130] > Restart=on-failure
	I0926 18:16:26.718836    5496 command_runner.go:130] > Environment=NO_PROXY=192.169.0.14
	I0926 18:16:26.718847    5496 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0926 18:16:26.718853    5496 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0926 18:16:26.718859    5496 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0926 18:16:26.718865    5496 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0926 18:16:26.718870    5496 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0926 18:16:26.718875    5496 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0926 18:16:26.718881    5496 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0926 18:16:26.718889    5496 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0926 18:16:26.718895    5496 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0926 18:16:26.718899    5496 command_runner.go:130] > ExecStart=
	I0926 18:16:26.718912    5496 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0926 18:16:26.718929    5496 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0926 18:16:26.718944    5496 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0926 18:16:26.718951    5496 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0926 18:16:26.718954    5496 command_runner.go:130] > LimitNOFILE=infinity
	I0926 18:16:26.718958    5496 command_runner.go:130] > LimitNPROC=infinity
	I0926 18:16:26.718962    5496 command_runner.go:130] > LimitCORE=infinity
	I0926 18:16:26.718967    5496 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0926 18:16:26.718971    5496 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0926 18:16:26.718976    5496 command_runner.go:130] > TasksMax=infinity
	I0926 18:16:26.718979    5496 command_runner.go:130] > TimeoutStartSec=0
	I0926 18:16:26.718985    5496 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0926 18:16:26.718990    5496 command_runner.go:130] > Delegate=yes
	I0926 18:16:26.718995    5496 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0926 18:16:26.719005    5496 command_runner.go:130] > KillMode=process
	I0926 18:16:26.719008    5496 command_runner.go:130] > [Install]
	I0926 18:16:26.719013    5496 command_runner.go:130] > WantedBy=multi-user.target
	I0926 18:16:26.719096    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 18:16:26.731985    5496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 18:16:26.749935    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 18:16:26.761214    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 18:16:26.771758    5496 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 18:16:26.794322    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 18:16:26.804929    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 18:16:26.819754    5496 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0926 18:16:26.820015    5496 ssh_runner.go:195] Run: which cri-dockerd
	I0926 18:16:26.822756    5496 command_runner.go:130] > /usr/bin/cri-dockerd
	I0926 18:16:26.822954    5496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 18:16:26.830085    5496 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 18:16:26.843567    5496 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 18:16:26.944121    5496 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 18:16:27.051128    5496 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 18:16:27.051158    5496 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
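
Editor's note: the 130-byte /etc/docker/daemon.json pushed above is what switches dockerd to the cgroupfs cgroup driver. The log does not show the file's contents, so the exact keys below ("exec-opts", the log and storage driver choices) are assumptions; roughly, it is a small JSON document like this sketch produces:

package provision

import "encoding/json"

// dockerDaemonJSON renders a minimal daemon.json selecting the given
// cgroup driver, in the spirit of the scp'd config above.
func dockerDaemonJSON(cgroupDriver string) ([]byte, error) {
	cfg := map[string]any{
		"exec-opts":      []string{"native.cgroupdriver=" + cgroupDriver},
		"log-driver":     "json-file",
		"storage-driver": "overlay2",
	}
	return json.MarshalIndent(cfg, "", "  ")
}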
	I0926 18:16:27.065233    5496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:16:27.171138    5496 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 18:17:28.193406    5496 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0926 18:17:28.193420    5496 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0926 18:17:28.193431    5496 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.021705353s)
	I0926 18:17:28.193497    5496 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0926 18:17:28.203177    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0926 18:17:28.203190    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:24.939272378Z" level=info msg="Starting up"
	I0926 18:17:28.203199    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:24.939744281Z" level=info msg="containerd not running, starting managed containerd"
	I0926 18:17:28.203212    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:24.940372696Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=496
	I0926 18:17:28.203223    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.955635497Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	I0926 18:17:28.203233    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975220104Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0926 18:17:28.203245    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975290387Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0926 18:17:28.203256    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975364574Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0926 18:17:28.203265    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975401354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0926 18:17:28.203276    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975543498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0926 18:17:28.203286    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975598213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0926 18:17:28.203305    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975731849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0926 18:17:28.203314    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975772849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0926 18:17:28.203324    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975804657Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0926 18:17:28.203334    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975834070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0926 18:17:28.203344    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975998842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0926 18:17:28.203353    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.976165653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0926 18:17:28.203371    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.977740780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0926 18:17:28.203387    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.977823231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0926 18:17:28.203424    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.977979310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0926 18:17:28.203438    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.978024001Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0926 18:17:28.203448    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.978133741Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0926 18:17:28.203456    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.978192781Z" level=info msg="metadata content store policy set" policy=shared
	I0926 18:17:28.203464    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979398865Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0926 18:17:28.203473    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979452106Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0926 18:17:28.203481    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979487510Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0926 18:17:28.203491    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979520613Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0926 18:17:28.203499    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979552321Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0926 18:17:28.203508    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979616545Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0926 18:17:28.203517    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979877476Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0926 18:17:28.203526    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979969253Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0926 18:17:28.203535    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980006327Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0926 18:17:28.203544    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980040846Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0926 18:17:28.203554    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980075255Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0926 18:17:28.203563    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980114319Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0926 18:17:28.203573    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980148760Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0926 18:17:28.203582    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980189045Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0926 18:17:28.203591    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980223417Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0926 18:17:28.203600    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980253164Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0926 18:17:28.203689    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980282269Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0926 18:17:28.203700    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980310608Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0926 18:17:28.203709    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980348289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203718    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980386978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203727    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980418532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203736    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980449540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203745    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980484042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203754    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980514235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203763    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980543443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203773    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980573293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203785    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980609651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203794    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980646773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203802    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980677054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203811    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980706205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203819    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980735214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203829    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980766272Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0926 18:17:28.203837    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980806833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203846    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980838839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203855    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980868321Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0926 18:17:28.203865    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980965209Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0926 18:17:28.203876    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981007924Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0926 18:17:28.203885    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981037680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0926 18:17:28.204036    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981066963Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0926 18:17:28.204049    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981094655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.204060    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981124463Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0926 18:17:28.204068    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981155319Z" level=info msg="NRI interface is disabled by configuration."
	I0926 18:17:28.204076    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981325910Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0926 18:17:28.204085    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981412041Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0926 18:17:28.204093    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981496206Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0926 18:17:28.204103    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981538298Z" level=info msg="containerd successfully booted in 0.026518s"
	I0926 18:17:28.204111    5496 command_runner.go:130] > Sep 27 01:16:25 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:25.961351885Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0926 18:17:28.204119    5496 command_runner.go:130] > Sep 27 01:16:25 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:25.971609471Z" level=info msg="Loading containers: start."
	I0926 18:17:28.204137    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.079462380Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0926 18:17:28.204148    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.142922131Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0926 18:17:28.204161    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.187253380Z" level=warning msg="error locating sandbox id e0ffe3e7a49ab09d3d6f73ed7b97f2135141a0ed4c3e63d2a302fd9ffd181dbb: sandbox e0ffe3e7a49ab09d3d6f73ed7b97f2135141a0ed4c3e63d2a302fd9ffd181dbb not found"
	I0926 18:17:28.204171    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.187440681Z" level=info msg="Loading containers: done."
	I0926 18:17:28.204180    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.195076424Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0926 18:17:28.204187    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.195150891Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0926 18:17:28.204197    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.195197197Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	I0926 18:17:28.204204    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.195352314Z" level=info msg="Daemon has completed initialization"
	I0926 18:17:28.204213    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.217613628Z" level=info msg="API listen on /var/run/docker.sock"
	I0926 18:17:28.204220    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.217699368Z" level=info msg="API listen on [::]:2376"
	I0926 18:17:28.204226    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 systemd[1]: Started Docker Application Container Engine.
	I0926 18:17:28.204236    5496 command_runner.go:130] > Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.252125643Z" level=info msg="Processing signal 'terminated'"
	I0926 18:17:28.204267    5496 command_runner.go:130] > Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.252968662Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0926 18:17:28.204277    5496 command_runner.go:130] > Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.253242428Z" level=info msg="Daemon shutdown complete"
	I0926 18:17:28.204285    5496 command_runner.go:130] > Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.253285728Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0926 18:17:28.204296    5496 command_runner.go:130] > Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.253375422Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0926 18:17:28.204303    5496 command_runner.go:130] > Sep 27 01:16:27 multinode-108000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0926 18:17:28.204308    5496 command_runner.go:130] > Sep 27 01:16:28 multinode-108000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0926 18:17:28.204314    5496 command_runner.go:130] > Sep 27 01:16:28 multinode-108000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0926 18:17:28.204320    5496 command_runner.go:130] > Sep 27 01:16:28 multinode-108000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0926 18:17:28.204326    5496 command_runner.go:130] > Sep 27 01:16:28 multinode-108000-m02 dockerd[907]: time="2024-09-27T01:16:28.287366515Z" level=info msg="Starting up"
	I0926 18:17:28.204336    5496 command_runner.go:130] > Sep 27 01:17:28 multinode-108000-m02 dockerd[907]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0926 18:17:28.204343    5496 command_runner.go:130] > Sep 27 01:17:28 multinode-108000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0926 18:17:28.204349    5496 command_runner.go:130] > Sep 27 01:17:28 multinode-108000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0926 18:17:28.204355    5496 command_runner.go:130] > Sep 27 01:17:28 multinode-108000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0926 18:17:28.231231    5496 out.go:201] 
	W0926 18:17:28.253111    5496 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 27 01:16:24 multinode-108000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 27 01:16:24 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:24.939272378Z" level=info msg="Starting up"
	Sep 27 01:16:24 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:24.939744281Z" level=info msg="containerd not running, starting managed containerd"
	Sep 27 01:16:24 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:24.940372696Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=496
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.955635497Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975220104Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975290387Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975364574Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975401354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975543498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975598213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975731849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975772849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975804657Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975834070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975998842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.976165653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.977740780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.977823231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.977979310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.978024001Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.978133741Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.978192781Z" level=info msg="metadata content store policy set" policy=shared
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979398865Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979452106Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979487510Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979520613Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979552321Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979616545Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979877476Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979969253Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980006327Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980040846Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980075255Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980114319Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980148760Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980189045Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980223417Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980253164Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980282269Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980310608Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980348289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980386978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980418532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980449540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980484042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980514235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980543443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980573293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980609651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980646773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980677054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980706205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980735214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980766272Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980806833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980838839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980868321Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980965209Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981007924Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981037680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981066963Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981094655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981124463Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981155319Z" level=info msg="NRI interface is disabled by configuration."
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981325910Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981412041Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981496206Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981538298Z" level=info msg="containerd successfully booted in 0.026518s"
	Sep 27 01:16:25 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:25.961351885Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 27 01:16:25 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:25.971609471Z" level=info msg="Loading containers: start."
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.079462380Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.142922131Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.187253380Z" level=warning msg="error locating sandbox id e0ffe3e7a49ab09d3d6f73ed7b97f2135141a0ed4c3e63d2a302fd9ffd181dbb: sandbox e0ffe3e7a49ab09d3d6f73ed7b97f2135141a0ed4c3e63d2a302fd9ffd181dbb not found"
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.187440681Z" level=info msg="Loading containers: done."
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.195076424Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.195150891Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.195197197Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.195352314Z" level=info msg="Daemon has completed initialization"
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.217613628Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.217699368Z" level=info msg="API listen on [::]:2376"
	Sep 27 01:16:26 multinode-108000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.252125643Z" level=info msg="Processing signal 'terminated'"
	Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.252968662Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.253242428Z" level=info msg="Daemon shutdown complete"
	Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.253285728Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.253375422Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 27 01:16:27 multinode-108000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 27 01:16:28 multinode-108000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 27 01:16:28 multinode-108000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 27 01:16:28 multinode-108000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 27 01:16:28 multinode-108000-m02 dockerd[907]: time="2024-09-27T01:16:28.287366515Z" level=info msg="Starting up"
	Sep 27 01:17:28 multinode-108000-m02 dockerd[907]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 27 01:17:28 multinode-108000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 27 01:17:28 multinode-108000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 27 01:17:28 multinode-108000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0926 18:17:28.253218    5496 out.go:270] * 
	W0926 18:17:28.254521    5496 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:17:28.316658    5496 out.go:201] 

                                                
                                                
** /stderr **
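The root cause visible in the journal above: after systemd restarted docker.service, the new dockerd (pid 907) waited from "Starting up" at 01:16:28 until 01:17:28 — the full 60 seconds between the two journal timestamps — to dial "/run/containerd/containerd.sock", then failed with "context deadline exceeded"; the previous dockerd instance (pid 489) had instead launched its own managed containerd on /var/run/docker/containerd/containerd.sock. A minimal triage sketch from inside the guest, assuming SSH access to the node and the socket paths shown in the journal (the profile and node names are taken from the failure above; these commands are not part of the original test run):

	# open a shell on the affected node
	out/minikube-darwin-amd64 -p multinode-108000 ssh -n multinode-108000-m02
	# inside the guest: check both units and the tail of the docker unit's log
	sudo systemctl status docker.service containerd.service
	sudo journalctl -xeu docker.service --no-pager | tail -n 50
	# confirm which containerd sockets actually exist at the paths the journal mentions
	ls -l /run/containerd/containerd.sock /var/run/docker/containerd/containerd.sock 2>/dev/null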
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-108000 --wait=true -v=8 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-108000 -n multinode-108000
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-108000 logs -n 25: (2.690767119s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                             |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-108000 cp multinode-108000-m02:/home/docker/cp-test.txt                                                           | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:10 PDT | 26 Sep 24 18:11 PDT |
	|         | multinode-108000:/home/docker/cp-test_multinode-108000-m02_multinode-108000.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-108000 ssh -n                                                                                                     | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:11 PDT | 26 Sep 24 18:11 PDT |
	|         | multinode-108000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-108000 ssh -n multinode-108000 sudo cat                                                                           | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:11 PDT | 26 Sep 24 18:11 PDT |
	|         | /home/docker/cp-test_multinode-108000-m02_multinode-108000.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-108000 cp multinode-108000-m02:/home/docker/cp-test.txt                                                           | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:11 PDT | 26 Sep 24 18:11 PDT |
	|         | multinode-108000-m03:/home/docker/cp-test_multinode-108000-m02_multinode-108000-m03.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-108000 ssh -n                                                                                                     | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:11 PDT | 26 Sep 24 18:11 PDT |
	|         | multinode-108000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-108000 ssh -n multinode-108000-m03 sudo cat                                                                       | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:11 PDT | 26 Sep 24 18:11 PDT |
	|         | /home/docker/cp-test_multinode-108000-m02_multinode-108000-m03.txt                                                          |                  |         |         |                     |                     |
	| cp      | multinode-108000 cp testdata/cp-test.txt                                                                                    | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:11 PDT | 26 Sep 24 18:11 PDT |
	|         | multinode-108000-m03:/home/docker/cp-test.txt                                                                               |                  |         |         |                     |                     |
	| ssh     | multinode-108000 ssh -n                                                                                                     | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:11 PDT | 26 Sep 24 18:11 PDT |
	|         | multinode-108000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-108000 cp multinode-108000-m03:/home/docker/cp-test.txt                                                           | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:11 PDT | 26 Sep 24 18:11 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1690180015/001/cp-test_multinode-108000-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-108000 ssh -n                                                                                                     | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:11 PDT | 26 Sep 24 18:11 PDT |
	|         | multinode-108000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-108000 cp multinode-108000-m03:/home/docker/cp-test.txt                                                           | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:11 PDT | 26 Sep 24 18:11 PDT |
	|         | multinode-108000:/home/docker/cp-test_multinode-108000-m03_multinode-108000.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-108000 ssh -n                                                                                                     | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:11 PDT | 26 Sep 24 18:11 PDT |
	|         | multinode-108000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-108000 ssh -n multinode-108000 sudo cat                                                                           | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:11 PDT | 26 Sep 24 18:11 PDT |
	|         | /home/docker/cp-test_multinode-108000-m03_multinode-108000.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-108000 cp multinode-108000-m03:/home/docker/cp-test.txt                                                           | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:11 PDT | 26 Sep 24 18:11 PDT |
	|         | multinode-108000-m02:/home/docker/cp-test_multinode-108000-m03_multinode-108000-m02.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-108000 ssh -n                                                                                                     | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:11 PDT | 26 Sep 24 18:11 PDT |
	|         | multinode-108000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-108000 ssh -n multinode-108000-m02 sudo cat                                                                       | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:11 PDT | 26 Sep 24 18:11 PDT |
	|         | /home/docker/cp-test_multinode-108000-m03_multinode-108000-m02.txt                                                          |                  |         |         |                     |                     |
	| node    | multinode-108000 node stop m03                                                                                              | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:11 PDT | 26 Sep 24 18:11 PDT |
	| node    | multinode-108000 node start                                                                                                 | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:11 PDT | 26 Sep 24 18:11 PDT |
	|         | m03 -v=7 --alsologtostderr                                                                                                  |                  |         |         |                     |                     |
	| node    | list -p multinode-108000                                                                                                    | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:11 PDT |                     |
	| stop    | -p multinode-108000                                                                                                         | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:11 PDT | 26 Sep 24 18:12 PDT |
	| start   | -p multinode-108000                                                                                                         | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:12 PDT | 26 Sep 24 18:14 PDT |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	| node    | list -p multinode-108000                                                                                                    | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:14 PDT |                     |
	| node    | multinode-108000 node delete                                                                                                | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:14 PDT | 26 Sep 24 18:14 PDT |
	|         | m03                                                                                                                         |                  |         |         |                     |                     |
	| stop    | multinode-108000 stop                                                                                                       | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:14 PDT | 26 Sep 24 18:15 PDT |
	| start   | -p multinode-108000                                                                                                         | multinode-108000 | jenkins | v1.34.0 | 26 Sep 24 18:15 PDT |                     |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                           |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
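
	The audit table above records the pattern the multinode cp tests exercise: copy a file between nodes with `minikube cp`, then read it back on each node via `minikube ssh -n <node> sudo cat ...` and compare. Below is a minimal sketch of that round-trip with os/exec, using the binary, profile, and paths from the table; the `-p` flag placement is an assumption about how the audited commands were actually invoked, and error handling is trimmed.

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// run invokes the minikube binary used in this report and returns its output.
	func run(args ...string) ([]byte, error) {
		return exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	}

	func main() {
		src := "multinode-108000-m03:/home/docker/cp-test.txt"
		dst := "multinode-108000:/home/docker/cp-test_multinode-108000-m03_multinode-108000.txt"

		// minikube cp <node>:<path> <node>:<path>, as in the cp rows above.
		if out, err := run("-p", "multinode-108000", "cp", src, dst); err != nil {
			fmt.Printf("cp failed: %v\n%s", err, out)
			return
		}

		// Read the file back on each node and compare, as the ssh rows do.
		a, _ := run("-p", "multinode-108000", "ssh", "-n", "multinode-108000-m03",
			"sudo", "cat", "/home/docker/cp-test.txt")
		b, _ := run("-p", "multinode-108000", "ssh", "-n", "multinode-108000",
			"sudo", "cat", "/home/docker/cp-test_multinode-108000-m03_multinode-108000.txt")
		fmt.Println("round-trip ok:", bytes.Equal(a, b))
	}
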
	
	
	==> Last Start <==
	Log file created at: 2024/09/26 18:15:10
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
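
	The header above describes the klog/glog line layout ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg) used by every log line that follows. A minimal sketch producing the same format with k8s.io/klog/v2 (assuming that module; minikube's own out.go/main.go plumbing is more elaborate):

	package main

	import (
		"flag"

		"k8s.io/klog/v2"
	)

	func main() {
		klog.InitFlags(nil) // registers -v, -logtostderr, etc. on the default FlagSet
		_ = flag.Set("logtostderr", "true")
		flag.Parse()
		defer klog.Flush()

		// Emits e.g.: I0926 18:15:10.752073    5496 main.go:18] Setting JSON to false
		klog.Infof("Setting JSON to %v", false)
		klog.Warning("gopshost.Virtualization returned error: not implemented yet")
	}
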
	I0926 18:15:10.750251    5496 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:15:10.750510    5496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:15:10.750516    5496 out.go:358] Setting ErrFile to fd 2...
	I0926 18:15:10.750520    5496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:15:10.750705    5496 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 18:15:10.752073    5496 out.go:352] Setting JSON to false
	I0926 18:15:10.775187    5496 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4480,"bootTime":1727395230,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0926 18:15:10.775336    5496 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:15:10.796791    5496 out.go:177] * [multinode-108000] minikube v1.34.0 on Darwin 14.6.1
	I0926 18:15:10.839687    5496 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:15:10.839724    5496 notify.go:220] Checking for updates...
	I0926 18:15:10.882369    5496 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 18:15:10.903644    5496 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0926 18:15:10.924697    5496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:15:10.945384    5496 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 18:15:10.966653    5496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:15:10.988445    5496 config.go:182] Loaded profile config "multinode-108000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:15:10.989143    5496 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:15:10.989216    5496 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:15:10.998872    5496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53362
	I0926 18:15:10.999243    5496 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:15:10.999629    5496 main.go:141] libmachine: Using API Version  1
	I0926 18:15:10.999639    5496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:15:10.999883    5496 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:15:10.999986    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:11.000169    5496 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:15:11.000432    5496 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:15:11.000459    5496 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:15:11.008768    5496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53364
	I0926 18:15:11.009105    5496 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:15:11.009453    5496 main.go:141] libmachine: Using API Version  1
	I0926 18:15:11.009466    5496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:15:11.009674    5496 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:15:11.009812    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:11.038448    5496 out.go:177] * Using the hyperkit driver based on existing profile
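
	The "Launching plugin server for driver hyperkit" / "Plugin server listening at address 127.0.0.1:53362" lines reflect libmachine's plugin model: the driver binary runs as a child process serving Go RPC on an ephemeral localhost port, and the main process dials that port for each .GetVersion/.GetState/.DriverName call. A toy sketch of that shape follows; the service name, method set, and wire details here are illustrative assumptions, not libmachine's actual API.

	package main

	import (
		"fmt"
		"log"
		"net"
		"net/rpc"
	)

	// Driver stands in for the plugin side (the docker-machine-driver-hyperkit
	// process). The method set is hypothetical.
	type Driver struct{}

	func (d *Driver) GetVersion(args int, reply *int) error   { *reply = 1; return nil }
	func (d *Driver) GetState(args int, reply *string) error { *reply = "Stopped"; return nil }

	func main() {
		// Plugin side: listen on an ephemeral localhost port, as in
		// "Plugin server listening at address 127.0.0.1:53362".
		srv := rpc.NewServer()
		if err := srv.Register(&Driver{}); err != nil {
			log.Fatal(err)
		}
		ln, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			log.Fatal(err)
		}
		go srv.Accept(ln)

		// Host side: dial the advertised address and issue calls like the
		// "() Calling .GetVersion" lines above.
		client, err := rpc.Dial("tcp", ln.Addr().String())
		if err != nil {
			log.Fatal(err)
		}
		var v int
		if err := client.Call("Driver.GetVersion", 0, &v); err != nil {
			log.Fatal(err)
		}
		var st string
		_ = client.Call("Driver.GetState", 0, &st)
		fmt.Println("plugin at", ln.Addr(), "version", v, "state", st)
	}
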
	I0926 18:15:11.080596    5496 start.go:297] selected driver: hyperkit
	I0926 18:15:11.080625    5496 start.go:901] validating driver "hyperkit" against &{Name:multinode-108000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-108000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:15:11.080871    5496 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:15:11.081068    5496 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:15:11.081299    5496 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19711-1128/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0926 18:15:11.091103    5496 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0926 18:15:11.094863    5496 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:15:11.094881    5496 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0926 18:15:11.097842    5496 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:15:11.097884    5496 cni.go:84] Creating CNI manager for ""
	I0926 18:15:11.097930    5496 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0926 18:15:11.098006    5496 start.go:340] cluster config:
	{Name:multinode-108000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-108000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
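
	The cluster config dumped above is what this run reloads from .minikube/profiles/multinode-108000/config.json (see the "Saving config to ..." lines). A trimmed sketch of reading a few of those fields back; this ClusterConfig is a hypothetical subset for illustration, not minikube's full type.

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// Node mirrors the per-node entries visible in the dump,
	// e.g. {Name:m02 IP:192.169.0.15 Port:8443 ... ControlPlane:false Worker:true}.
	type Node struct {
		Name         string
		IP           string
		Port         int
		ControlPlane bool
		Worker       bool
	}

	// ClusterConfig is a hypothetical subset of the profile config.
	type ClusterConfig struct {
		Name   string
		Driver string
		Memory int
		CPUs   int
		Nodes  []Node
	}

	func main() {
		raw, err := os.ReadFile(os.ExpandEnv(
			"$HOME/.minikube/profiles/multinode-108000/config.json"))
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		var cc ClusterConfig
		if err := json.Unmarshal(raw, &cc); err != nil {
			fmt.Println("unmarshal:", err)
			return
		}
		// e.g. "multinode-108000 on hyperkit, 2200MB, 2 nodes"
		fmt.Printf("%s on %s, %dMB, %d nodes\n", cc.Name, cc.Driver, cc.Memory, len(cc.Nodes))
	}
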
	I0926 18:15:11.098103    5496 iso.go:125] acquiring lock: {Name:mka8a9c5a237c1e4ae233281d2ff7965d13a843d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:15:11.140576    5496 out.go:177] * Starting "multinode-108000" primary control-plane node in "multinode-108000" cluster
	I0926 18:15:11.161702    5496 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:15:11.161786    5496 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0926 18:15:11.161813    5496 cache.go:56] Caching tarball of preloaded images
	I0926 18:15:11.162007    5496 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 18:15:11.162026    5496 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:15:11.162207    5496 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/config.json ...
	I0926 18:15:11.163098    5496 start.go:360] acquireMachinesLock for multinode-108000: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:15:11.163244    5496 start.go:364] duration metric: took 123.219µs to acquireMachinesLock for "multinode-108000"
	I0926 18:15:11.163281    5496 start.go:96] Skipping create...Using existing machine configuration
	I0926 18:15:11.163297    5496 fix.go:54] fixHost starting: 
	I0926 18:15:11.163724    5496 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:15:11.163750    5496 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:15:11.172811    5496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53366
	I0926 18:15:11.173221    5496 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:15:11.173642    5496 main.go:141] libmachine: Using API Version  1
	I0926 18:15:11.173653    5496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:15:11.174043    5496 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:15:11.174185    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:11.174331    5496 main.go:141] libmachine: (multinode-108000) Calling .GetState
	I0926 18:15:11.174419    5496 main.go:141] libmachine: (multinode-108000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:15:11.174522    5496 main.go:141] libmachine: (multinode-108000) DBG | hyperkit pid from json: 5408
	I0926 18:15:11.175429    5496 main.go:141] libmachine: (multinode-108000) DBG | hyperkit pid 5408 missing from process table
	I0926 18:15:11.175457    5496 fix.go:112] recreateIfNeeded on multinode-108000: state=Stopped err=<nil>
	I0926 18:15:11.175474    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	W0926 18:15:11.175568    5496 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 18:15:11.196340    5496 out.go:177] * Restarting existing hyperkit VM for "multinode-108000" ...
	I0926 18:15:11.238414    5496 main.go:141] libmachine: (multinode-108000) Calling .Start
	I0926 18:15:11.238592    5496 main.go:141] libmachine: (multinode-108000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:15:11.238630    5496 main.go:141] libmachine: (multinode-108000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/hyperkit.pid
	I0926 18:15:11.239933    5496 main.go:141] libmachine: (multinode-108000) DBG | hyperkit pid 5408 missing from process table
	I0926 18:15:11.239948    5496 main.go:141] libmachine: (multinode-108000) DBG | pid 5408 is in state "Stopped"
	I0926 18:15:11.239966    5496 main.go:141] libmachine: (multinode-108000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/hyperkit.pid...
	I0926 18:15:11.240316    5496 main.go:141] libmachine: (multinode-108000) DBG | Using UUID 1fff9e18-98b5-4af0-b682-f00d5d335588
	I0926 18:15:11.349220    5496 main.go:141] libmachine: (multinode-108000) DBG | Generated MAC 6e:13:d0:11:59:38
	I0926 18:15:11.349243    5496 main.go:141] libmachine: (multinode-108000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-108000
	I0926 18:15:11.349450    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1fff9e18-98b5-4af0-b682-f00d5d335588", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaba0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:15:11.349490    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1fff9e18-98b5-4af0-b682-f00d5d335588", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaba0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:15:11.349527    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1fff9e18-98b5-4af0-b682-f00d5d335588", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/multinode-108000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-108000"}
	I0926 18:15:11.349580    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1fff9e18-98b5-4af0-b682-f00d5d335588 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/multinode-108000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-108000"
	I0926 18:15:11.349593    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 18:15:11.351042    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 DEBUG: hyperkit: Pid is 5510
	I0926 18:15:11.351416    5496 main.go:141] libmachine: (multinode-108000) DBG | Attempt 0
	I0926 18:15:11.351429    5496 main.go:141] libmachine: (multinode-108000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:15:11.351501    5496 main.go:141] libmachine: (multinode-108000) DBG | hyperkit pid from json: 5510
	I0926 18:15:11.353311    5496 main.go:141] libmachine: (multinode-108000) DBG | Searching for 6e:13:d0:11:59:38 in /var/db/dhcpd_leases ...
	I0926 18:15:11.353378    5496 main.go:141] libmachine: (multinode-108000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0926 18:15:11.353405    5496 main.go:141] libmachine: (multinode-108000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:15:11.353419    5496 main.go:141] libmachine: (multinode-108000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f75815}
	I0926 18:15:11.353427    5496 main.go:141] libmachine: (multinode-108000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f757ea}
	I0926 18:15:11.353431    5496 main.go:141] libmachine: (multinode-108000) DBG | Found match: 6e:13:d0:11:59:38
	I0926 18:15:11.353456    5496 main.go:141] libmachine: (multinode-108000) DBG | IP: 192.169.0.14
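
	The driver recovers the VM's IP by scanning /var/db/dhcpd_leases for the MAC hyperkit generated (6e:13:d0:11:59:38 above). A sketch of that lookup follows; the {key=value} block layout is an assumption based on macOS bootpd's usual on-disk format, where hw_address is stored as "<type>,<mac>" (e.g. "1,6e:13:d0:11:59:38") and MAC octets are not zero-padded.

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findIPByMAC returns the ip_address of the lease block whose hw_address
	// matches mac, mirroring the "Searching for ... in /var/db/dhcpd_leases" step.
	func findIPByMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip, hw string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "{":
				ip, hw = "", "" // new lease block
			case line == "}":
				if hw == mac && ip != "" {
					return ip, nil
				}
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				v := strings.TrimPrefix(line, "hw_address=")
				if i := strings.IndexByte(v, ','); i >= 0 {
					hw = v[i+1:] // drop the "<type>," prefix
				}
			}
		}
		if err := sc.Err(); err != nil {
			return "", err
		}
		return "", fmt.Errorf("no lease for %s", mac)
	}

	func main() {
		ip, err := findIPByMAC("/var/db/dhcpd_leases", "6e:13:d0:11:59:38")
		fmt.Println(ip, err) // expected here: 192.169.0.14 <nil>
	}
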
	I0926 18:15:11.353470    5496 main.go:141] libmachine: (multinode-108000) Calling .GetConfigRaw
	I0926 18:15:11.354165    5496 main.go:141] libmachine: (multinode-108000) Calling .GetIP
	I0926 18:15:11.354362    5496 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/config.json ...
	I0926 18:15:11.354951    5496 machine.go:93] provisionDockerMachine start ...
	I0926 18:15:11.354961    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:11.355075    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:11.355184    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:11.355302    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:11.355440    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:11.355538    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:11.355681    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:15:11.355867    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0926 18:15:11.355875    5496 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 18:15:11.359076    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 18:15:11.410801    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 18:15:11.411497    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:15:11.411508    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:15:11.411517    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:15:11.411525    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:15:11.796734    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 18:15:11.796747    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 18:15:11.911687    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:15:11.911703    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:15:11.911711    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:15:11.911716    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:15:11.912526    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 18:15:11.912534    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 18:15:17.511540    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:17 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0926 18:15:17.511579    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:17 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0926 18:15:17.511589    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:17 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0926 18:15:17.536373    5496 main.go:141] libmachine: (multinode-108000) DBG | 2024/09/26 18:15:17 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0926 18:15:22.427366    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 18:15:22.427383    5496 main.go:141] libmachine: (multinode-108000) Calling .GetMachineName
	I0926 18:15:22.427531    5496 buildroot.go:166] provisioning hostname "multinode-108000"
	I0926 18:15:22.427543    5496 main.go:141] libmachine: (multinode-108000) Calling .GetMachineName
	I0926 18:15:22.427644    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:22.427741    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:22.427847    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.427947    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.428065    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:22.428207    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:15:22.428344    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0926 18:15:22.428351    5496 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-108000 && echo "multinode-108000" | sudo tee /etc/hostname
	I0926 18:15:22.502850    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-108000
	
	I0926 18:15:22.502870    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:22.503007    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:22.503129    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.503213    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.503295    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:22.503420    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:15:22.503564    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0926 18:15:22.503575    5496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-108000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-108000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-108000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 18:15:22.575924    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
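
	Each "About to run SSH command" / "SSH cmd err, output" pair above is one round-trip through libmachine's native SSH client (the &{... 192.169.0.14 22 ...} dump is its connection config). A condensed equivalent with golang.org/x/crypto/ssh, using the key path and docker user that appear in the sshutil lines below; host-key verification is skipped here purely for brevity, which is only tolerable against a throwaway test VM.

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/Users/jenkins/minikube-integration/19711-1128/" +
			".minikube/machines/multinode-108000/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}

		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
		}
		client, err := ssh.Dial("tcp", "192.169.0.14:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		// The provisioner's first probe: run `hostname` and log the result.
		out, err := sess.Output("hostname")
		fmt.Printf("SSH cmd err, output: %v: %s", err, out)
	}
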
	I0926 18:15:22.575945    5496 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 18:15:22.575961    5496 buildroot.go:174] setting up certificates
	I0926 18:15:22.575967    5496 provision.go:84] configureAuth start
	I0926 18:15:22.575973    5496 main.go:141] libmachine: (multinode-108000) Calling .GetMachineName
	I0926 18:15:22.576112    5496 main.go:141] libmachine: (multinode-108000) Calling .GetIP
	I0926 18:15:22.576208    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:22.576306    5496 provision.go:143] copyHostCerts
	I0926 18:15:22.576335    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 18:15:22.576404    5496 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 18:15:22.576412    5496 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 18:15:22.576543    5496 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 18:15:22.576756    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 18:15:22.576795    5496 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 18:15:22.576800    5496 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 18:15:22.576876    5496 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 18:15:22.577008    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 18:15:22.577045    5496 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 18:15:22.577050    5496 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 18:15:22.577123    5496 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 18:15:22.577269    5496 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.multinode-108000 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-108000]
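
	The configureAuth step above issues a server certificate signed by the minikube CA with the SAN list [127.0.0.1 192.169.0.14 localhost minikube multinode-108000]. A compact crypto/x509 sketch of the same issuance; to stay self-contained it creates a throwaway CA, whereas the real flow reuses the ca.pem/ca-key.pem paths shown in the log.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA (the real flow loads ca.pem / ca-key.pem instead).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert carrying the SANs from the provision.go line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-108000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "multinode-108000"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.14")},
		}
		der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
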
	I0926 18:15:22.652306    5496 provision.go:177] copyRemoteCerts
	I0926 18:15:22.652366    5496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 18:15:22.652379    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:22.652514    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:22.652639    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.652743    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:22.652838    5496 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/id_rsa Username:docker}
	I0926 18:15:22.692386    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 18:15:22.692453    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0926 18:15:22.712471    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 18:15:22.712531    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 18:15:22.732130    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 18:15:22.732186    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 18:15:22.752204    5496 provision.go:87] duration metric: took 176.224795ms to configureAuth
	I0926 18:15:22.752216    5496 buildroot.go:189] setting minikube options for container-runtime
	I0926 18:15:22.752378    5496 config.go:182] Loaded profile config "multinode-108000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:15:22.752391    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:22.752518    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:22.752598    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:22.752698    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.752797    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.752883    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:22.753007    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:15:22.753131    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0926 18:15:22.753138    5496 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 18:15:22.818711    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 18:15:22.818728    5496 buildroot.go:70] root file system type: tmpfs
	I0926 18:15:22.818806    5496 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 18:15:22.818821    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:22.818962    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:22.819053    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.819147    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.819233    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:22.819375    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:15:22.819517    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0926 18:15:22.819562    5496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 18:15:22.895883    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 18:15:22.895903    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:22.896045    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:22.896139    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.896225    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:22.896304    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:22.896467    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:15:22.896608    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0926 18:15:22.896620    5496 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 18:15:24.581382    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 18:15:24.581401    5496 machine.go:96] duration metric: took 13.226381415s to provisionDockerMachine
	I0926 18:15:24.581411    5496 start.go:293] postStartSetup for "multinode-108000" (driver="hyperkit")
	I0926 18:15:24.581419    5496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 18:15:24.581432    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:24.581622    5496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 18:15:24.581635    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:24.581740    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:24.581842    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:24.581927    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:24.582073    5496 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/id_rsa Username:docker}
	I0926 18:15:24.625069    5496 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 18:15:24.628593    5496 command_runner.go:130] > NAME=Buildroot
	I0926 18:15:24.628608    5496 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0926 18:15:24.628612    5496 command_runner.go:130] > ID=buildroot
	I0926 18:15:24.628616    5496 command_runner.go:130] > VERSION_ID=2023.02.9
	I0926 18:15:24.628620    5496 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0926 18:15:24.628818    5496 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 18:15:24.628829    5496 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 18:15:24.628924    5496 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 18:15:24.629110    5496 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 18:15:24.629117    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 18:15:24.629337    5496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 18:15:24.639059    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 18:15:24.673630    5496 start.go:296] duration metric: took 92.20968ms for postStartSetup
	I0926 18:15:24.673656    5496 fix.go:56] duration metric: took 13.510304615s for fixHost
	I0926 18:15:24.673670    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:24.673801    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:24.673893    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:24.673989    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:24.674075    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:24.674222    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:15:24.674353    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0926 18:15:24.674360    5496 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 18:15:24.738613    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727399724.859428913
	
	I0926 18:15:24.738631    5496 fix.go:216] guest clock: 1727399724.859428913
	I0926 18:15:24.738636    5496 fix.go:229] Guest: 2024-09-26 18:15:24.859428913 -0700 PDT Remote: 2024-09-26 18:15:24.67366 -0700 PDT m=+13.959588443 (delta=185.768913ms)
	I0926 18:15:24.738657    5496 fix.go:200] guest clock delta is within tolerance: 185.768913ms
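
	The fixHost epilogue above measures guest/host clock skew: it runs `date +%s.%N` in the guest (1727399724.859428913), compares that against the host clock captured at the same moment, and only resyncs when the delta exceeds a tolerance. A small sketch of the comparison; the tolerance constant here is an assumption, not minikube's actual fix.go threshold, and float parsing loses sub-microsecond precision, which is irrelevant at a ~185ms delta.

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// guestDelta parses `date +%s.%N` output from the guest and returns the
	// signed skew against the supplied host timestamp.
	func guestDelta(dateOutput string, hostNow time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(hostNow), nil
	}

	func main() {
		// Values taken from the two fix.go lines above.
		hostNow := time.Date(2024, 9, 26, 18, 15, 24, 673660000, time.Local)
		d, err := guestDelta("1727399724.859428913\n", hostNow)
		if err != nil {
			panic(err)
		}
		const tolerance = time.Second // assumed threshold for illustration
		within := math.Abs(float64(d)) <= float64(tolerance)
		fmt.Printf("delta=%v within tolerance: %v\n", d, within) // ≈185ms, true
	}
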
	I0926 18:15:24.738661    5496 start.go:83] releasing machines lock for "multinode-108000", held for 13.575343927s
	I0926 18:15:24.738678    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:24.738818    5496 main.go:141] libmachine: (multinode-108000) Calling .GetIP
	I0926 18:15:24.738930    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:24.739260    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:24.739368    5496 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:15:24.739450    5496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 18:15:24.739483    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:24.739528    5496 ssh_runner.go:195] Run: cat /version.json
	I0926 18:15:24.739538    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:15:24.739590    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:24.739641    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:15:24.739666    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:24.739718    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:15:24.739750    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:24.739806    5496 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:15:24.739829    5496 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/id_rsa Username:docker}
	I0926 18:15:24.739890    5496 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/id_rsa Username:docker}
	I0926 18:15:24.815971    5496 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0926 18:15:24.816911    5496 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I0926 18:15:24.817118    5496 ssh_runner.go:195] Run: systemctl --version
	I0926 18:15:24.822017    5496 command_runner.go:130] > systemd 252 (252)
	I0926 18:15:24.822046    5496 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0926 18:15:24.822149    5496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 18:15:24.826294    5496 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0926 18:15:24.826317    5496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 18:15:24.826360    5496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 18:15:24.838874    5496 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0926 18:15:24.839164    5496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
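
	The find/mv one-liner above side-lines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so kindnet ends up as the only active CNI. An equivalent sketch in Go; the globs approximate the find expression (-maxdepth 1 is implicit since Glob does not recurse), and it would need to run as root inside the guest.

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		var disabled []string
		for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, err := filepath.Glob(pat)
			if err != nil {
				panic(err) // only fires on a malformed pattern
			}
			for _, p := range matches {
				if strings.HasSuffix(p, ".mk_disabled") {
					continue // already side-lined; mirrors -not -name *.mk_disabled
				}
				if err := os.Rename(p, p+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, "skip:", err)
					continue
				}
				disabled = append(disabled, p)
			}
		}
		// e.g. "disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)"
		fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
	}
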
	I0926 18:15:24.839174    5496 start.go:495] detecting cgroup driver to use...
	I0926 18:15:24.839283    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 18:15:24.854233    5496 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0926 18:15:24.854488    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 18:15:24.862837    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 18:15:24.871133    5496 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 18:15:24.871187    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 18:15:24.879395    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 18:15:24.887784    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 18:15:24.895839    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 18:15:24.904195    5496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 18:15:24.912648    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 18:15:24.920954    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 18:15:24.929204    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
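
The sed sequence above rewrites /etc/containerd/config.toml in place: the sandbox (pause) image is pinned to registry.k8s.io/pause:3.10, SystemdCgroup is forced to false (the cgroupfs driver, matching the kubelet's cgroupDriver later in this log), the legacy io.containerd.runtime.v1.linux and runc.v1 shims are mapped to io.containerd.runc.v2, and the CNI conf_dir is pointed at /etc/cni/net.d. A minimal way to spot-check the result before the containerd restart below (the grep keys come straight from the sed patterns; exact indentation depends on the stock config):

	grep -nE 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	# expected, give or take indentation:
	#   sandbox_image = "registry.k8s.io/pause:3.10"
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
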
	I0926 18:15:24.937448    5496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 18:15:24.944910    5496 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 18:15:24.944934    5496 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 18:15:24.944973    5496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 18:15:24.953505    5496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
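
The sysctl probe fails with status 255 because /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded; the modprobe that follows is the fallback, and ip_forward is then enabled for pod traffic. A minimal sketch of the same probe-then-load step:

	# probe; on failure load the module, then re-check and enable forwarding
	sudo sysctl net.bridge.bridge-nf-call-iptables 2>/dev/null || sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
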
	I0926 18:15:24.961687    5496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:15:25.073722    5496 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 18:15:25.092487    5496 start.go:495] detecting cgroup driver to use...
	I0926 18:15:25.092582    5496 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 18:15:25.107069    5496 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0926 18:15:25.107082    5496 command_runner.go:130] > [Unit]
	I0926 18:15:25.107087    5496 command_runner.go:130] > Description=Docker Application Container Engine
	I0926 18:15:25.107091    5496 command_runner.go:130] > Documentation=https://docs.docker.com
	I0926 18:15:25.107095    5496 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0926 18:15:25.107099    5496 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0926 18:15:25.107103    5496 command_runner.go:130] > StartLimitBurst=3
	I0926 18:15:25.107107    5496 command_runner.go:130] > StartLimitIntervalSec=60
	I0926 18:15:25.107110    5496 command_runner.go:130] > [Service]
	I0926 18:15:25.107114    5496 command_runner.go:130] > Type=notify
	I0926 18:15:25.107118    5496 command_runner.go:130] > Restart=on-failure
	I0926 18:15:25.107124    5496 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0926 18:15:25.107143    5496 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0926 18:15:25.107149    5496 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0926 18:15:25.107155    5496 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0926 18:15:25.107162    5496 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0926 18:15:25.107169    5496 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0926 18:15:25.107176    5496 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0926 18:15:25.107184    5496 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0926 18:15:25.107190    5496 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0926 18:15:25.107193    5496 command_runner.go:130] > ExecStart=
	I0926 18:15:25.107210    5496 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0926 18:15:25.107216    5496 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0926 18:15:25.107221    5496 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0926 18:15:25.107226    5496 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0926 18:15:25.107230    5496 command_runner.go:130] > LimitNOFILE=infinity
	I0926 18:15:25.107233    5496 command_runner.go:130] > LimitNPROC=infinity
	I0926 18:15:25.107237    5496 command_runner.go:130] > LimitCORE=infinity
	I0926 18:15:25.107241    5496 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0926 18:15:25.107246    5496 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0926 18:15:25.107249    5496 command_runner.go:130] > TasksMax=infinity
	I0926 18:15:25.107253    5496 command_runner.go:130] > TimeoutStartSec=0
	I0926 18:15:25.107259    5496 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0926 18:15:25.107263    5496 command_runner.go:130] > Delegate=yes
	I0926 18:15:25.107268    5496 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0926 18:15:25.107272    5496 command_runner.go:130] > KillMode=process
	I0926 18:15:25.107277    5496 command_runner.go:130] > [Install]
	I0926 18:15:25.107287    5496 command_runner.go:130] > WantedBy=multi-user.target
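
The unit dump above shows the systemd drop-in pattern for overriding ExecStart=: an empty ExecStart= first clears the command inherited from the base dockerd unit (systemd treats multiple ExecStart lines as a command sequence, which is invalid for Type=notify), and the second line supplies the full command. A minimal sketch of the same pattern, with illustrative paths and a deliberately short command line:

	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker
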
	I0926 18:15:25.107365    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 18:15:25.120217    5496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 18:15:25.133987    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 18:15:25.144928    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 18:15:25.155787    5496 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 18:15:25.176283    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 18:15:25.187021    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 18:15:25.201439    5496 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0926 18:15:25.201678    5496 ssh_runner.go:195] Run: which cri-dockerd
	I0926 18:15:25.204469    5496 command_runner.go:130] > /usr/bin/cri-dockerd
	I0926 18:15:25.204594    5496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 18:15:25.211668    5496 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 18:15:25.225438    5496 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 18:15:25.328498    5496 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 18:15:25.435484    5496 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 18:15:25.435549    5496 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
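
The "scp memory" above writes a 130-byte /etc/docker/daemon.json whose payload is not echoed into the log. A representative daemon.json that selects the cgroupfs driver named on the previous line (the exact bytes minikube writes are not shown here, so treat this content as an assumption):

	sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF

The daemon-reload and docker restart that make it take effect follow immediately in the log.
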
	I0926 18:15:25.449569    5496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:15:25.550403    5496 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 18:15:27.893151    5496 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.342719676s)
	I0926 18:15:27.893221    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 18:15:27.905045    5496 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0926 18:15:27.918823    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 18:15:27.929932    5496 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 18:15:28.032246    5496 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 18:15:28.137978    5496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:15:28.251312    5496 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 18:15:28.264994    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 18:15:28.275886    5496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:15:28.366478    5496 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 18:15:28.423109    5496 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 18:15:28.423205    5496 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0926 18:15:28.427642    5496 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0926 18:15:28.427654    5496 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0926 18:15:28.427658    5496 command_runner.go:130] > Device: 0,22	Inode: 762         Links: 1
	I0926 18:15:28.427664    5496 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0926 18:15:28.427668    5496 command_runner.go:130] > Access: 2024-09-27 01:15:28.500999470 +0000
	I0926 18:15:28.427672    5496 command_runner.go:130] > Modify: 2024-09-27 01:15:28.500999470 +0000
	I0926 18:15:28.427677    5496 command_runner.go:130] > Change: 2024-09-27 01:15:28.502999351 +0000
	I0926 18:15:28.427680    5496 command_runner.go:130] >  Birth: -
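
The stat above is the success case of the 60-second socket wait announced at start.go:542. Were the socket slow to appear, the equivalent poll would look like this (interval and iteration count are illustrative; minikube's internal wait is implemented in Go, not shell):

	for _ in $(seq 1 60); do
	  test -S /var/run/cri-dockerd.sock && break
	  sleep 1
	done
	stat /var/run/cri-dockerd.sock
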
	I0926 18:15:28.427896    5496 start.go:563] Will wait 60s for crictl version
	I0926 18:15:28.427954    5496 ssh_runner.go:195] Run: which crictl
	I0926 18:15:28.431063    5496 command_runner.go:130] > /usr/bin/crictl
	I0926 18:15:28.431194    5496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 18:15:28.458785    5496 command_runner.go:130] > Version:  0.1.0
	I0926 18:15:28.458798    5496 command_runner.go:130] > RuntimeName:  docker
	I0926 18:15:28.458802    5496 command_runner.go:130] > RuntimeVersion:  27.3.1
	I0926 18:15:28.458807    5496 command_runner.go:130] > RuntimeApiVersion:  v1
	I0926 18:15:28.459669    5496 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0926 18:15:28.459770    5496 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 18:15:28.475658    5496 command_runner.go:130] > 27.3.1
	I0926 18:15:28.475784    5496 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 18:15:28.492767    5496 command_runner.go:130] > 27.3.1
	I0926 18:15:28.537040    5496 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0926 18:15:28.537086    5496 main.go:141] libmachine: (multinode-108000) Calling .GetIP
	I0926 18:15:28.537491    5496 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0926 18:15:28.541787    5496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
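
The bash pipeline above is an idempotent /etc/hosts update: strip any existing host.minikube.internal entry, append the fresh mapping, and copy the temp file back under sudo so the shell redirect itself never needs root. The same pattern, unrolled with the values from the log:

	IP=192.169.0.1; NAME=host.minikube.internal
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
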
	I0926 18:15:28.551380    5496 kubeadm.go:883] updating cluster {Name:multinode-108000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-108000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 18:15:28.551476    5496 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:15:28.551556    5496 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 18:15:28.563909    5496 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0926 18:15:28.563923    5496 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0926 18:15:28.563927    5496 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0926 18:15:28.563931    5496 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0926 18:15:28.563935    5496 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0926 18:15:28.563939    5496 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0926 18:15:28.563955    5496 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0926 18:15:28.563959    5496 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0926 18:15:28.563968    5496 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 18:15:28.563972    5496 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0926 18:15:28.564481    5496 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0926 18:15:28.564490    5496 docker.go:615] Images already preloaded, skipping extraction
	I0926 18:15:28.564573    5496 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 18:15:28.575924    5496 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0926 18:15:28.575939    5496 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0926 18:15:28.575943    5496 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0926 18:15:28.575947    5496 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0926 18:15:28.575951    5496 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0926 18:15:28.575954    5496 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0926 18:15:28.575960    5496 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0926 18:15:28.575965    5496 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0926 18:15:28.575969    5496 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 18:15:28.575973    5496 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0926 18:15:28.576659    5496 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
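
The decision at docker.go:615 ("Images already preloaded, skipping extraction") is driven by the docker images listing above: when the expected tags are all present, the preload tarball is not re-extracted. A rough sketch of that check for a single image (the image tag is taken from the log; the shell form is only an approximation of the Go logic):

	docker images --format '{{.Repository}}:{{.Tag}}' \
	  | grep -qx 'registry.k8s.io/kube-apiserver:v1.31.1' \
	  && echo "preloaded" || echo "needs extraction"
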
	I0926 18:15:28.576678    5496 cache_images.go:84] Images are preloaded, skipping loading
	I0926 18:15:28.576688    5496 kubeadm.go:934] updating node { 192.169.0.14 8443 v1.31.1 docker true true} ...
	I0926 18:15:28.576773    5496 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-108000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-108000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 18:15:28.576856    5496 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0926 18:15:28.611238    5496 command_runner.go:130] > cgroupfs
	I0926 18:15:28.611772    5496 cni.go:84] Creating CNI manager for ""
	I0926 18:15:28.611782    5496 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0926 18:15:28.611793    5496 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 18:15:28.611808    5496 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.14 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-108000 NodeName:multinode-108000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 18:15:28.611887    5496 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-108000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 18:15:28.611968    5496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0926 18:15:28.619541    5496 command_runner.go:130] > kubeadm
	I0926 18:15:28.619547    5496 command_runner.go:130] > kubectl
	I0926 18:15:28.619550    5496 command_runner.go:130] > kubelet
	I0926 18:15:28.619657    5496 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 18:15:28.619706    5496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 18:15:28.626911    5496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0926 18:15:28.640341    5496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 18:15:28.653661    5496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
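
The kubeadm.yaml written here still declares kubeadm.k8s.io/v1beta3, which kubeadm 1.31 accepts but warns about in every init phase further down (the W0927 "deprecated API spec" lines). The migration those warnings suggest, sketched with the paths from this log (the output filename is illustrative):

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
	  --old-config /var/tmp/minikube/kubeadm.yaml \
	  --new-config /var/tmp/minikube/kubeadm-migrated.yaml
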
	I0926 18:15:28.667133    5496 ssh_runner.go:195] Run: grep 192.169.0.14	control-plane.minikube.internal$ /etc/hosts
	I0926 18:15:28.670052    5496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 18:15:28.679370    5496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:15:28.776594    5496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 18:15:28.791198    5496 certs.go:68] Setting up /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000 for IP: 192.169.0.14
	I0926 18:15:28.791211    5496 certs.go:194] generating shared ca certs ...
	I0926 18:15:28.791222    5496 certs.go:226] acquiring lock for ca certs: {Name:mk3d4665cb5756ae49ea6f7cd2a58d273aecc507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:15:28.791411    5496 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key
	I0926 18:15:28.791491    5496 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key
	I0926 18:15:28.791502    5496 certs.go:256] generating profile certs ...
	I0926 18:15:28.791596    5496 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/client.key
	I0926 18:15:28.791675    5496 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/apiserver.key.1450c8f5
	I0926 18:15:28.791743    5496 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/proxy-client.key
	I0926 18:15:28.791750    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0926 18:15:28.791771    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0926 18:15:28.791788    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0926 18:15:28.791805    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0926 18:15:28.791824    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0926 18:15:28.791851    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0926 18:15:28.791887    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0926 18:15:28.791906    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0926 18:15:28.792003    5496 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem (1338 bytes)
	W0926 18:15:28.792051    5496 certs.go:480] ignoring /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679_empty.pem, impossibly tiny 0 bytes
	I0926 18:15:28.792065    5496 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem (1675 bytes)
	I0926 18:15:28.792095    5496 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem (1082 bytes)
	I0926 18:15:28.792128    5496 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem (1123 bytes)
	I0926 18:15:28.792160    5496 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem (1675 bytes)
	I0926 18:15:28.792231    5496 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem (1708 bytes)
	I0926 18:15:28.792268    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem -> /usr/share/ca-certificates/1679.pem
	I0926 18:15:28.792294    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0926 18:15:28.792313    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0926 18:15:28.792769    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 18:15:28.824491    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 18:15:28.858920    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 18:15:28.884029    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0926 18:15:28.907578    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0926 18:15:28.927328    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0926 18:15:28.947177    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 18:15:28.967268    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 18:15:28.987093    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/1679.pem --> /usr/share/ca-certificates/1679.pem (1338 bytes)
	I0926 18:15:29.007110    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1708 bytes)
	I0926 18:15:29.026731    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 18:15:29.046322    5496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 18:15:29.059808    5496 ssh_runner.go:195] Run: openssl version
	I0926 18:15:29.063793    5496 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0926 18:15:29.063977    5496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 18:15:29.072344    5496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 18:15:29.075691    5496 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 18:15:29.075791    5496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 18:15:29.075833    5496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 18:15:29.079940    5496 command_runner.go:130] > b5213941
	I0926 18:15:29.080071    5496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 18:15:29.088331    5496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1679.pem && ln -fs /usr/share/ca-certificates/1679.pem /etc/ssl/certs/1679.pem"
	I0926 18:15:29.096643    5496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1679.pem
	I0926 18:15:29.099914    5496 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 27 00:36 /usr/share/ca-certificates/1679.pem
	I0926 18:15:29.100043    5496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:36 /usr/share/ca-certificates/1679.pem
	I0926 18:15:29.100083    5496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1679.pem
	I0926 18:15:29.104211    5496 command_runner.go:130] > 51391683
	I0926 18:15:29.104328    5496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1679.pem /etc/ssl/certs/51391683.0"
	I0926 18:15:29.112463    5496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0926 18:15:29.120800    5496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0926 18:15:29.124144    5496 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 27 00:36 /usr/share/ca-certificates/16792.pem
	I0926 18:15:29.124244    5496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:36 /usr/share/ca-certificates/16792.pem
	I0926 18:15:29.124298    5496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0926 18:15:29.128361    5496 command_runner.go:130] > 3ec20f2e
	I0926 18:15:29.128595    5496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/3ec20f2e.0"
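
The three test/ln blocks above implement OpenSSL's hashed-directory lookup: a CA in /etc/ssl/certs is only found if it is also reachable under its subject hash as <hash>.0. Per certificate, the step is (values for minikubeCA taken from the log):

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints b5213941 for this CA
	sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"
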
	I0926 18:15:29.136915    5496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 18:15:29.140253    5496 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 18:15:29.140267    5496 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0926 18:15:29.140272    5496 command_runner.go:130] > Device: 253,1	Inode: 529437      Links: 1
	I0926 18:15:29.140277    5496 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0926 18:15:29.140283    5496 command_runner.go:130] > Access: 2024-09-27 01:12:19.505817222 +0000
	I0926 18:15:29.140287    5496 command_runner.go:130] > Modify: 2024-09-27 01:08:44.822156699 +0000
	I0926 18:15:29.140295    5496 command_runner.go:130] > Change: 2024-09-27 01:08:44.822156699 +0000
	I0926 18:15:29.140301    5496 command_runner.go:130] >  Birth: 2024-09-27 01:08:44.822156699 +0000
	I0926 18:15:29.140414    5496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0926 18:15:29.144643    5496 command_runner.go:130] > Certificate will not expire
	I0926 18:15:29.144777    5496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0926 18:15:29.148962    5496 command_runner.go:130] > Certificate will not expire
	I0926 18:15:29.149056    5496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0926 18:15:29.153170    5496 command_runner.go:130] > Certificate will not expire
	I0926 18:15:29.153336    5496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0926 18:15:29.157522    5496 command_runner.go:130] > Certificate will not expire
	I0926 18:15:29.157678    5496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0926 18:15:29.161829    5496 command_runner.go:130] > Certificate will not expire
	I0926 18:15:29.161978    5496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0926 18:15:29.166239    5496 command_runner.go:130] > Certificate will not expire
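
The -checkend 86400 flag asks openssl whether the certificate expires within the next 86,400 seconds (24 hours); a non-zero exit would trigger regeneration instead of the "Certificate will not expire" path seen here. The same sweep, as a loop over a few of the cert names from the log:

	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  openssl x509 -noout -checkend 86400 \
	    -in "/var/lib/minikube/certs/$c.crt" || echo "$c expires within 24h"
	done
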
	I0926 18:15:29.166368    5496 kubeadm.go:392] StartCluster: {Name:multinode-108000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-108000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:15:29.166498    5496 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 18:15:29.181724    5496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 18:15:29.189178    5496 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0926 18:15:29.189188    5496 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0926 18:15:29.189195    5496 command_runner.go:130] > /var/lib/minikube/etcd:
	I0926 18:15:29.189199    5496 command_runner.go:130] > member
	I0926 18:15:29.189263    5496 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0926 18:15:29.189272    5496 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0926 18:15:29.189316    5496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0926 18:15:29.196536    5496 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0926 18:15:29.196843    5496 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-108000" does not appear in /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 18:15:29.196935    5496 kubeconfig.go:62] /Users/jenkins/minikube-integration/19711-1128/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-108000" cluster setting kubeconfig missing "multinode-108000" context setting]
	I0926 18:15:29.197123    5496 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/kubeconfig: {Name:mkb9c069c578c490d8cb734b05a21821e9215482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:15:29.197689    5496 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 18:15:29.197890    5496 kapi.go:59] client config for multinode-108000: &rest.Config{Host:"https://192.169.0.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/client.key", CAFile:"/Users/jenkins/minikube-integration/19711-1128/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xdc8df00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 18:15:29.198213    5496 cert_rotation.go:140] Starting client certificate rotation controller
	I0926 18:15:29.198396    5496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0926 18:15:29.205621    5496 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.14
	I0926 18:15:29.205636    5496 kubeadm.go:1160] stopping kube-system containers ...
	I0926 18:15:29.205706    5496 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 18:15:29.222337    5496 command_runner.go:130] > c5d1e02f3410
	I0926 18:15:29.222349    5496 command_runner.go:130] > 264e74b184f3
	I0926 18:15:29.222353    5496 command_runner.go:130] > ae6756186a89
	I0926 18:15:29.222357    5496 command_runner.go:130] > aa5128e84e3c
	I0926 18:15:29.222360    5496 command_runner.go:130] > 24f91ce476a0
	I0926 18:15:29.222364    5496 command_runner.go:130] > 67dac98df54b
	I0926 18:15:29.222376    5496 command_runner.go:130] > d28db07575ac
	I0926 18:15:29.222380    5496 command_runner.go:130] > 9aa764225ca3
	I0926 18:15:29.222384    5496 command_runner.go:130] > 6c14e4e50817
	I0926 18:15:29.222387    5496 command_runner.go:130] > 0b00cd940822
	I0926 18:15:29.222390    5496 command_runner.go:130] > 96b13fc13d92
	I0926 18:15:29.222394    5496 command_runner.go:130] > e8c9a9508a99
	I0926 18:15:29.222397    5496 command_runner.go:130] > e8ecb49c95ed
	I0926 18:15:29.222424    5496 command_runner.go:130] > 0e2ed0aa0566
	I0926 18:15:29.222431    5496 command_runner.go:130] > 0d2737b4b446
	I0926 18:15:29.222435    5496 command_runner.go:130] > e4d5b4323b94
	I0926 18:15:29.222438    5496 command_runner.go:130] > 700ba38f29cd
	I0926 18:15:29.222441    5496 command_runner.go:130] > 1f9a87a7d94b
	I0926 18:15:29.222446    5496 command_runner.go:130] > bd18faf8df7e
	I0926 18:15:29.222449    5496 command_runner.go:130] > 819f06ad9f8f
	I0926 18:15:29.222452    5496 command_runner.go:130] > 7e18c6962c7e
	I0926 18:15:29.222456    5496 command_runner.go:130] > 1405f38eef7c
	I0926 18:15:29.222459    5496 command_runner.go:130] > 0bab0a59e548
	I0926 18:15:29.222462    5496 command_runner.go:130] > 5fe6f666077c
	I0926 18:15:29.222465    5496 command_runner.go:130] > 51a6a22182a5
	I0926 18:15:29.222476    5496 command_runner.go:130] > 9b970bc21b00
	I0926 18:15:29.222480    5496 command_runner.go:130] > 73d594bc25b2
	I0926 18:15:29.222484    5496 command_runner.go:130] > dab704818c00
	I0926 18:15:29.222487    5496 command_runner.go:130] > 63266cd7525c
	I0926 18:15:29.222491    5496 command_runner.go:130] > a111425be00e
	I0926 18:15:29.222493    5496 command_runner.go:130] > 61ef59d75417
	I0926 18:15:29.222513    5496 docker.go:483] Stopping containers: [c5d1e02f3410 264e74b184f3 ae6756186a89 aa5128e84e3c 24f91ce476a0 67dac98df54b d28db07575ac 9aa764225ca3 6c14e4e50817 0b00cd940822 96b13fc13d92 e8c9a9508a99 e8ecb49c95ed 0e2ed0aa0566 0d2737b4b446 e4d5b4323b94 700ba38f29cd 1f9a87a7d94b bd18faf8df7e 819f06ad9f8f 7e18c6962c7e 1405f38eef7c 0bab0a59e548 5fe6f666077c 51a6a22182a5 9b970bc21b00 73d594bc25b2 dab704818c00 63266cd7525c a111425be00e 61ef59d75417]
	I0926 18:15:29.222596    5496 ssh_runner.go:195] Run: docker stop c5d1e02f3410 264e74b184f3 ae6756186a89 aa5128e84e3c 24f91ce476a0 67dac98df54b d28db07575ac 9aa764225ca3 6c14e4e50817 0b00cd940822 96b13fc13d92 e8c9a9508a99 e8ecb49c95ed 0e2ed0aa0566 0d2737b4b446 e4d5b4323b94 700ba38f29cd 1f9a87a7d94b bd18faf8df7e 819f06ad9f8f 7e18c6962c7e 1405f38eef7c 0bab0a59e548 5fe6f666077c 51a6a22182a5 9b970bc21b00 73d594bc25b2 dab704818c00 63266cd7525c a111425be00e 61ef59d75417
	I0926 18:15:29.238263    5496 command_runner.go:130] > c5d1e02f3410
	I0926 18:15:29.238275    5496 command_runner.go:130] > 264e74b184f3
	I0926 18:15:29.238279    5496 command_runner.go:130] > ae6756186a89
	I0926 18:15:29.238282    5496 command_runner.go:130] > aa5128e84e3c
	I0926 18:15:29.238286    5496 command_runner.go:130] > 24f91ce476a0
	I0926 18:15:29.238289    5496 command_runner.go:130] > 67dac98df54b
	I0926 18:15:29.238300    5496 command_runner.go:130] > d28db07575ac
	I0926 18:15:29.238313    5496 command_runner.go:130] > 9aa764225ca3
	I0926 18:15:29.238318    5496 command_runner.go:130] > 6c14e4e50817
	I0926 18:15:29.238323    5496 command_runner.go:130] > 0b00cd940822
	I0926 18:15:29.238326    5496 command_runner.go:130] > 96b13fc13d92
	I0926 18:15:29.238329    5496 command_runner.go:130] > e8c9a9508a99
	I0926 18:15:29.238332    5496 command_runner.go:130] > e8ecb49c95ed
	I0926 18:15:29.238336    5496 command_runner.go:130] > 0e2ed0aa0566
	I0926 18:15:29.238341    5496 command_runner.go:130] > 0d2737b4b446
	I0926 18:15:29.238346    5496 command_runner.go:130] > e4d5b4323b94
	I0926 18:15:29.238349    5496 command_runner.go:130] > 700ba38f29cd
	I0926 18:15:29.238353    5496 command_runner.go:130] > 1f9a87a7d94b
	I0926 18:15:29.238356    5496 command_runner.go:130] > bd18faf8df7e
	I0926 18:15:29.238359    5496 command_runner.go:130] > 819f06ad9f8f
	I0926 18:15:29.238362    5496 command_runner.go:130] > 7e18c6962c7e
	I0926 18:15:29.238367    5496 command_runner.go:130] > 1405f38eef7c
	I0926 18:15:29.238370    5496 command_runner.go:130] > 0bab0a59e548
	I0926 18:15:29.238373    5496 command_runner.go:130] > 5fe6f666077c
	I0926 18:15:29.238377    5496 command_runner.go:130] > 51a6a22182a5
	I0926 18:15:29.238380    5496 command_runner.go:130] > 9b970bc21b00
	I0926 18:15:29.238388    5496 command_runner.go:130] > 73d594bc25b2
	I0926 18:15:29.238392    5496 command_runner.go:130] > dab704818c00
	I0926 18:15:29.238395    5496 command_runner.go:130] > 63266cd7525c
	I0926 18:15:29.238398    5496 command_runner.go:130] > a111425be00e
	I0926 18:15:29.238403    5496 command_runner.go:130] > 61ef59d75417
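
The restart path collects every kube-system container by name filter (docker.go:483) and stops them in a single docker stop call. Condensed into shell:

	ids=$(docker ps -a --filter=name='k8s_.*_(kube-system)_' --format='{{.ID}}')
	[ -n "$ids" ] && docker stop $ids   # $ids deliberately unquoted to split into args
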
	I0926 18:15:29.238471    5496 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0926 18:15:29.250881    5496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 18:15:29.258275    5496 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0926 18:15:29.258286    5496 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0926 18:15:29.258293    5496 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0926 18:15:29.258310    5496 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 18:15:29.258381    5496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 18:15:29.258389    5496 kubeadm.go:157] found existing configuration files:
	
	I0926 18:15:29.258433    5496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 18:15:29.265480    5496 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 18:15:29.265501    5496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 18:15:29.265542    5496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 18:15:29.272854    5496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 18:15:29.279864    5496 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 18:15:29.279879    5496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 18:15:29.279921    5496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 18:15:29.287366    5496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 18:15:29.294234    5496 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 18:15:29.294250    5496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 18:15:29.294289    5496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 18:15:29.301488    5496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 18:15:29.308467    5496 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 18:15:29.308484    5496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 18:15:29.308528    5496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
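
Each of the four checks above greps a kubeconfig for the expected control-plane endpoint and, when the file is missing or stale, removes it so the kubeadm init phase kubeconfig run below regenerates it. The pattern as one loop:

	EP=https://control-plane.minikube.internal:8443
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "$EP" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	done
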
	I0926 18:15:29.315784    5496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 18:15:29.323262    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:15:29.387035    5496 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 18:15:29.387208    5496 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0926 18:15:29.387366    5496 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0926 18:15:29.387491    5496 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0926 18:15:29.387741    5496 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0926 18:15:29.387801    5496 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0926 18:15:29.388148    5496 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0926 18:15:29.388270    5496 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0926 18:15:29.388465    5496 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0926 18:15:29.388551    5496 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0926 18:15:29.388699    5496 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0926 18:15:29.388894    5496 command_runner.go:130] > [certs] Using the existing "sa" key
	I0926 18:15:29.389752    5496 command_runner.go:130] ! W0927 01:15:29.506652    1381 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:29.389777    5496 command_runner.go:130] ! W0927 01:15:29.507154    1381 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:29.389850    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:15:29.423060    5496 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 18:15:29.547278    5496 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 18:15:29.932742    5496 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 18:15:30.080632    5496 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 18:15:30.279123    5496 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 18:15:30.476307    5496 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 18:15:30.478496    5496 command_runner.go:130] ! W0927 01:15:29.545360    1386 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:30.478520    5496 command_runner.go:130] ! W0927 01:15:29.545862    1386 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:30.478535    5496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.088664848s)
	I0926 18:15:30.478548    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:15:30.523419    5496 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 18:15:30.528646    5496 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 18:15:30.528655    5496 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0926 18:15:30.631719    5496 command_runner.go:130] ! W0927 01:15:30.633706    1391 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:30.631742    5496 command_runner.go:130] ! W0927 01:15:30.634198    1391 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:30.631757    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:15:30.683673    5496 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 18:15:30.683688    5496 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 18:15:30.685442    5496 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 18:15:30.686081    5496 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 18:15:30.688156    5496 command_runner.go:130] ! W0927 01:15:30.800041    1419 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:30.688173    5496 command_runner.go:130] ! W0927 01:15:30.800677    1419 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:30.688339    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:15:30.744560    5496 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 18:15:30.750301    5496 command_runner.go:130] ! W0927 01:15:30.864962    1425 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:30.750323    5496 command_runner.go:130] ! W0927 01:15:30.865788    1425 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
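
The restart path above drives kubeadm one phase at a time (kubeconfig, kubelet-start, control-plane, etcd) against the same /var/tmp/minikube/kubeadm.yaml, rather than running a single `kubeadm init`. A minimal Go sketch of that sequencing, with the binary and config paths taken from the log; this would run inside the guest VM, the `sudo env PATH=...` wrapper is dropped, and error handling is simplified:

    // Sketch only: sequence kubeadm init phases the way the log above does.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // runPhase invokes one `kubeadm init phase ...` with the shared config.
    func runPhase(phase ...string) error {
    	args := append(append([]string{"init", "phase"}, phase...),
    		"--config", "/var/tmp/minikube/kubeadm.yaml")
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubeadm", args...)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	// Same order as the log: kubeconfig, kubelet-start, control-plane, etcd.
    	for _, p := range [][]string{
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	} {
    		if err := runPhase(p...); err != nil {
    			fmt.Println("phase failed:", err)
    			return
    		}
    	}
    }
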
	I0926 18:15:30.750347    5496 api_server.go:52] waiting for apiserver process to appear ...
	I0926 18:15:30.750432    5496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:15:31.252589    5496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:15:31.752671    5496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:15:32.251547    5496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:15:32.752318    5496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:15:32.763497    5496 command_runner.go:130] > 1714
	I0926 18:15:32.763640    5496 api_server.go:72] duration metric: took 2.01328698s to wait for apiserver process to appear ...
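
The roughly half-second `pgrep -xnf kube-apiserver.*minikube.*` loop above is how minikube decides the apiserver process has started: pgrep exits non-zero until a match exists, then prints the PID (1714 here). A minimal local sketch of the same poll-until-found pattern; minikube actually runs the command over SSH via ssh_runner, and the interval and timeout below are illustrative:

    // Sketch only: retry a process lookup until it succeeds or times out.
    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForProcess retries `pgrep -xnf <pattern>` until it returns a PID.
    func waitForProcess(ctx context.Context, pattern string) (string, error) {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		out, err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil // pgrep printed the PID
    		}
    		select {
    		case <-ctx.Done():
    			return "", fmt.Errorf("process %q never appeared: %w", pattern, ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	pid, err := waitForProcess(ctx, "kube-apiserver.*minikube.*")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("kube-apiserver PID:", pid)
    }
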
	I0926 18:15:32.763651    5496 api_server.go:88] waiting for apiserver healthz status ...
	I0926 18:15:32.763667    5496 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0926 18:15:35.166029    5496 api_server.go:279] https://192.169.0.14:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0926 18:15:35.166046    5496 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0926 18:15:35.166056    5496 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0926 18:15:35.182853    5496 api_server.go:279] https://192.169.0.14:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0926 18:15:35.182869    5496 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0926 18:15:35.264983    5496 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0926 18:15:35.270010    5496 api_server.go:279] https://192.169.0.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0926 18:15:35.270024    5496 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0926 18:15:35.764546    5496 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0926 18:15:35.769975    5496 api_server.go:279] https://192.169.0.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0926 18:15:35.769990    5496 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0926 18:15:36.264338    5496 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0926 18:15:36.269447    5496 api_server.go:279] https://192.169.0.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0926 18:15:36.269463    5496 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0926 18:15:36.764566    5496 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0926 18:15:36.768587    5496 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
	I0926 18:15:36.768646    5496 round_trippers.go:463] GET https://192.169.0.14:8443/version
	I0926 18:15:36.768652    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:36.768660    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:36.768664    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:36.776855    5496 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0926 18:15:36.776867    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:36.776872    5496 round_trippers.go:580]     Content-Length: 263
	I0926 18:15:36.776875    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:36 GMT
	I0926 18:15:36.776878    5496 round_trippers.go:580]     Audit-Id: 97e34db1-7a8c-4e7f-a5b0-6b08911b79fa
	I0926 18:15:36.776886    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:36.776889    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:36.776892    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:36.776894    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:36.776914    5496 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0926 18:15:36.776968    5496 api_server.go:141] control plane version: v1.31.1
	I0926 18:15:36.776978    5496 api_server.go:131] duration metric: took 4.013304806s to wait for apiserver health ...
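
The healthz wait above goes through three phases that the sketch below mirrors: 403 while the probe is still anonymous (the rbac/bootstrap-roles poststarthook has not granted access yet), 500 while the remaining poststarthooks finish, and finally 200 "ok", after which /version is read to get the control plane version. This is a sketch rather than minikube's api_server.go; skipping TLS verification is an illustration shortcut (minikube trusts the cluster CA instead), and the endpoint and version fields are taken from the log:

    // Sketch only: poll /healthz until HTTP 200, then decode /version.
    package main

    import (
    	"crypto/tls"
    	"encoding/json"
    	"fmt"
    	"net/http"
    	"time"
    )

    type versionInfo struct {
    	Major      string `json:"major"`
    	Minor      string `json:"minor"`
    	GitVersion string `json:"gitVersion"`
    }

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	base := "https://192.169.0.14:8443"

    	// Early probes return 403, then 500, exactly as in the log above.
    	deadline := time.Now().Add(2 * time.Minute)
    	for {
    		resp, err := client.Get(base + "/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				break
    			}
    		}
    		if time.Now().After(deadline) {
    			fmt.Println("apiserver never became healthy")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}

    	resp, err := client.Get(base + "/version")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer resp.Body.Close()
    	var v versionInfo
    	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("control plane version:", v.GitVersion) // v1.31.1 in this run
    }
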
	I0926 18:15:36.776984    5496 cni.go:84] Creating CNI manager for ""
	I0926 18:15:36.776988    5496 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0926 18:15:36.800999    5496 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0926 18:15:36.821520    5496 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0926 18:15:36.825303    5496 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0926 18:15:36.825319    5496 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0926 18:15:36.825328    5496 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0926 18:15:36.825337    5496 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0926 18:15:36.825344    5496 command_runner.go:130] > Access: 2024-09-27 01:15:21.669218995 +0000
	I0926 18:15:36.825351    5496 command_runner.go:130] > Modify: 2024-09-23 21:47:52.000000000 +0000
	I0926 18:15:36.825359    5496 command_runner.go:130] > Change: 2024-09-27 01:15:19.118121505 +0000
	I0926 18:15:36.825366    5496 command_runner.go:130] >  Birth: -
	I0926 18:15:36.825580    5496 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0926 18:15:36.825588    5496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0926 18:15:36.846746    5496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0926 18:15:37.350952    5496 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0926 18:15:37.350967    5496 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0926 18:15:37.350971    5496 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0926 18:15:37.350975    5496 command_runner.go:130] > daemonset.apps/kindnet configured
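
The CNI step simply shells out to the pinned kubectl with an explicit kubeconfig, which is what the Run line above does over SSH after scp-ing cni.yaml into the guest. A minimal local sketch with the paths exactly as they appear in the log:

    // Sketch only: apply the kindnet CNI manifest via the pinned kubectl.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command(
    		"/var/lib/minikube/binaries/v1.31.1/kubectl",
    		"apply",
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", "/var/tmp/minikube/cni.yaml",
    	)
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet configured"
    	if err != nil {
    		fmt.Println("kubectl apply failed:", err)
    	}
    }
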
	I0926 18:15:37.351043    5496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 18:15:37.351083    5496 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0926 18:15:37.351093    5496 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0926 18:15:37.351136    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0926 18:15:37.351141    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.351147    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.351151    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.354427    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:15:37.354438    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.354444    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.354447    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.354450    5496 round_trippers.go:580]     Audit-Id: 4fcb048c-626e-4471-a508-621f2f1c02c6
	I0926 18:15:37.354452    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.354454    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.354457    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.355283    5496 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1221"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89950 chars]
	I0926 18:15:37.359708    5496 system_pods.go:59] 12 kube-system pods found
	I0926 18:15:37.359733    5496 system_pods.go:61] "coredns-7c65d6cfc9-hxdhm" [ff9bbfa0-9278-44d7-abc5-7a38ed77ce23] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 18:15:37.359739    5496 system_pods.go:61] "etcd-multinode-108000" [2a5e99f4-416d-4d75-acd2-33231f5f780d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 18:15:37.359744    5496 system_pods.go:61] "kindnet-ktwmw" [5065643a-e9ee-44a6-a05d-b9154074dd84] Running
	I0926 18:15:37.359747    5496 system_pods.go:61] "kindnet-qlv2x" [08c7f9d2-c689-40b5-95fc-a48157150778] Running
	I0926 18:15:37.359750    5496 system_pods.go:61] "kindnet-wbk29" [a9ff7c3f-b5e1-40e5-ab9d-a38e2696988f] Running
	I0926 18:15:37.359754    5496 system_pods.go:61] "kube-apiserver-multinode-108000" [b8011715-128c-4dfc-94b7-cc9c04907c8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 18:15:37.359759    5496 system_pods.go:61] "kube-controller-manager-multinode-108000" [42fac17d-5eda-41e8-8747-902b605e747f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 18:15:37.359763    5496 system_pods.go:61] "kube-proxy-9kjdl" [979606a2-6bc4-46c0-8333-000bc25722f3] Running
	I0926 18:15:37.359765    5496 system_pods.go:61] "kube-proxy-ngs2x" [f95c0316-b4a8-4f0c-a90b-a88af50fbc68] Running
	I0926 18:15:37.359768    5496 system_pods.go:61] "kube-proxy-pwrqj" [dfc98f0e-705d-41fd-a871-9d4f8455b11d] Running
	I0926 18:15:37.359771    5496 system_pods.go:61] "kube-scheduler-multinode-108000" [e5b482e0-154d-4620-8f24-1ebf181b9c1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 18:15:37.359775    5496 system_pods.go:61] "storage-provisioner" [e67377e5-f7c5-4625-9739-3703de1f4739] Running
	I0926 18:15:37.359779    5496 system_pods.go:74] duration metric: took 8.729378ms to wait for pod list to return data ...
	I0926 18:15:37.359786    5496 node_conditions.go:102] verifying NodePressure condition ...
	I0926 18:15:37.359823    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes
	I0926 18:15:37.359828    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.359833    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.359837    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.361829    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.361838    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.361846    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.361852    5496 round_trippers.go:580]     Audit-Id: f5aa31fb-2428-47cc-8347-f9410728b8bd
	I0926 18:15:37.361859    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.361864    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.361868    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.361874    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.362111    5496 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1221"},"items":[{"metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1203","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10158 chars]
	I0926 18:15:37.362544    5496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 18:15:37.362558    5496 node_conditions.go:123] node cpu capacity is 2
	I0926 18:15:37.362568    5496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 18:15:37.362578    5496 node_conditions.go:123] node cpu capacity is 2
	I0926 18:15:37.362582    5496 node_conditions.go:105] duration metric: took 2.793131ms to run NodePressure ...
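
The NodePressure check above reads per-node cpu and ephemeral-storage capacity out of the NodeList response (both nodes report 2 CPUs and 17734596Ki here). An equivalent sketch using client-go rather than minikube's internal node_conditions helpers; the kubeconfig path is an assumption for illustration:

    // Sketch only: list nodes and print the capacities the log reports.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    	}
    }
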
	I0926 18:15:37.362592    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:15:37.465696    5496 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0926 18:15:37.619472    5496 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0926 18:15:37.620493    5496 command_runner.go:130] ! W0927 01:15:37.537959    2228 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:37.620510    5496 command_runner.go:130] ! W0927 01:15:37.538519    2228 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 18:15:37.620527    5496 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0926 18:15:37.620589    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0926 18:15:37.620595    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.620601    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.620605    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.622467    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.622496    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.622502    5496 round_trippers.go:580]     Audit-Id: e456332e-d319-45cc-b7b7-7af6bdadd549
	I0926 18:15:37.622506    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.622509    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.622511    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.622514    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.622518    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.622818    5496 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1223"},"items":[{"metadata":{"name":"etcd-multinode-108000","namespace":"kube-system","uid":"2a5e99f4-416d-4d75-acd2-33231f5f780d","resourceVersion":"1206","creationTimestamp":"2024-09-27T01:08:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"416a07783603924d51ef5ca1abc7c318","kubernetes.io/config.mirror":"416a07783603924d51ef5ca1abc7c318","kubernetes.io/config.seen":"2024-09-27T01:08:53.027445649Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 31223 chars]
	I0926 18:15:37.623551    5496 kubeadm.go:739] kubelet initialised
	I0926 18:15:37.623561    5496 kubeadm.go:740] duration metric: took 3.026723ms waiting for restarted kubelet to initialise ...
	I0926 18:15:37.623568    5496 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0926 18:15:37.623599    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0926 18:15:37.623604    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.623610    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.623614    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.625151    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.625158    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.625165    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.625168    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.625174    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.625179    5496 round_trippers.go:580]     Audit-Id: 3921b1e0-be8a-4be0-b8ec-28e8ca02b5d7
	I0926 18:15:37.625183    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.625187    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.625889    5496 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1223"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89950 chars]
	I0926 18:15:37.627833    5496 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace to be "Ready" ...
	I0926 18:15:37.627880    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:15:37.627885    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.627891    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.627896    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.629167    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.629174    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.629178    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.629182    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.629185    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.629188    5496 round_trippers.go:580]     Audit-Id: bf8c117e-157f-4745-a8ed-a8c3ab5e3832
	I0926 18:15:37.629190    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.629193    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.629485    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:15:37.629739    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:37.629746    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.629753    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.629758    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.631130    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.631136    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.631141    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.631143    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.631147    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.631149    5496 round_trippers.go:580]     Audit-Id: 5dfbf840-3705-44b6-b981-b1a6c84753e7
	I0926 18:15:37.631152    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.631155    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.631299    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1203","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5300 chars]
	I0926 18:15:37.631481    5496 pod_ready.go:98] node "multinode-108000" hosting pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:37.631491    5496 pod_ready.go:82] duration metric: took 3.649288ms for pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace to be "Ready" ...
	E0926 18:15:37.631497    5496 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-108000" hosting pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
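
The wait loop above gates every pod check on the hosting node first: while multinode-108000 still reports Ready:"False", each pod wait is recorded and skipped, which is what the "(skipping!)" lines mean. A client-go sketch of that two-step check; the pod name and namespace are taken from the log, the kubeconfig path is an assumption, and this is not minikube's pod_ready.go:

    // Sketch only: check the node's Ready condition before the pod's.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	ctx := context.Background()
    	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-hxdhm", metav1.GetOptions{})
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	if !nodeReady(node) {
    		fmt.Printf("node %s not Ready, skipping pod %s\n", node.Name, pod.Name)
    		return
    	}
    	fmt.Println("pod ready:", podReady(pod))
    }
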
	I0926 18:15:37.631503    5496 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:15:37.631533    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-108000
	I0926 18:15:37.631537    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.631542    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.631550    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.633180    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.633186    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.633191    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.633195    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.633199    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.633201    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.633204    5496 round_trippers.go:580]     Audit-Id: 80d3d23d-8c8d-4d9e-81ab-6f6b586c7476
	I0926 18:15:37.633206    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.633493    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-108000","namespace":"kube-system","uid":"2a5e99f4-416d-4d75-acd2-33231f5f780d","resourceVersion":"1206","creationTimestamp":"2024-09-27T01:08:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"416a07783603924d51ef5ca1abc7c318","kubernetes.io/config.mirror":"416a07783603924d51ef5ca1abc7c318","kubernetes.io/config.seen":"2024-09-27T01:08:53.027445649Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6888 chars]
	I0926 18:15:37.633723    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:37.633730    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.633736    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.633739    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.634919    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.634926    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.634931    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.634935    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.634939    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.634943    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.634946    5496 round_trippers.go:580]     Audit-Id: ca68bf5a-bba4-476e-ad7f-28a326e90032
	I0926 18:15:37.634950    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.635121    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1203","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5300 chars]
	I0926 18:15:37.635314    5496 pod_ready.go:98] node "multinode-108000" hosting pod "etcd-multinode-108000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:37.635325    5496 pod_ready.go:82] duration metric: took 3.817202ms for pod "etcd-multinode-108000" in "kube-system" namespace to be "Ready" ...
	E0926 18:15:37.635331    5496 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-108000" hosting pod "etcd-multinode-108000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:37.635342    5496 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:15:37.635373    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-108000
	I0926 18:15:37.635378    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.635383    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.635388    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.636642    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.636649    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.636653    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.636657    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.636661    5496 round_trippers.go:580]     Audit-Id: d622a5b6-1484-4490-943a-0979fe9146ed
	I0926 18:15:37.636667    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.636669    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.636671    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.636814    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-108000","namespace":"kube-system","uid":"b8011715-128c-4dfc-94b7-cc9c04907c8a","resourceVersion":"1209","creationTimestamp":"2024-09-27T01:08:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.14:8443","kubernetes.io/config.hash":"3b2e0fdf454135a81bc6cacb88271d66","kubernetes.io/config.mirror":"3b2e0fdf454135a81bc6cacb88271d66","kubernetes.io/config.seen":"2024-09-27T01:08:53.027447712Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8136 chars]
	I0926 18:15:37.637064    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:37.637071    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.637077    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.637080    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.638194    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.638202    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.638208    5496 round_trippers.go:580]     Audit-Id: c1a3a6a4-2f27-46f2-9e8f-d52bd9ff3bb7
	I0926 18:15:37.638212    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.638216    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.638219    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.638223    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.638227    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.638336    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1203","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5300 chars]
	I0926 18:15:37.638508    5496 pod_ready.go:98] node "multinode-108000" hosting pod "kube-apiserver-multinode-108000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:37.638517    5496 pod_ready.go:82] duration metric: took 3.169848ms for pod "kube-apiserver-multinode-108000" in "kube-system" namespace to be "Ready" ...
	E0926 18:15:37.638522    5496 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-108000" hosting pod "kube-apiserver-multinode-108000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:37.638528    5496 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:15:37.638557    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-108000
	I0926 18:15:37.638562    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.638568    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.638570    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.639598    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:37.639605    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.639610    5496 round_trippers.go:580]     Audit-Id: 2a845ccb-913b-4e6b-97df-fc34682663e2
	I0926 18:15:37.639614    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.639617    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.639621    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.639625    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.639633    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.639799    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-108000","namespace":"kube-system","uid":"42fac17d-5eda-41e8-8747-902b605e747f","resourceVersion":"1210","creationTimestamp":"2024-09-27T01:08:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fec5fbfbd6a0fb8784a74d22da6a6ca2","kubernetes.io/config.mirror":"fec5fbfbd6a0fb8784a74d22da6a6ca2","kubernetes.io/config.seen":"2024-09-27T01:08:53.027448437Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7727 chars]
	I0926 18:15:37.752679    5496 request.go:632] Waited for 112.618774ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:37.752773    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:37.752782    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.752793    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.752800    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.755216    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:37.755228    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.755236    5496 round_trippers.go:580]     Audit-Id: 89f83d82-d21a-481c-8402-15e416c8d851
	I0926 18:15:37.755241    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.755245    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.755253    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.755262    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.755267    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:37 GMT
	I0926 18:15:37.755466    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1203","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5300 chars]
	I0926 18:15:37.755727    5496 pod_ready.go:98] node "multinode-108000" hosting pod "kube-controller-manager-multinode-108000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:37.755741    5496 pod_ready.go:82] duration metric: took 117.206782ms for pod "kube-controller-manager-multinode-108000" in "kube-system" namespace to be "Ready" ...
	E0926 18:15:37.755750    5496 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-108000" hosting pod "kube-controller-manager-multinode-108000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:37.755758    5496 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9kjdl" in "kube-system" namespace to be "Ready" ...
	I0926 18:15:37.951740    5496 request.go:632] Waited for 195.930354ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9kjdl
	I0926 18:15:37.951848    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9kjdl
	I0926 18:15:37.951860    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:37.951871    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:37.951877    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:37.954934    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:15:37.954954    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:37.954962    5496 round_trippers.go:580]     Audit-Id: 49b3a949-470e-45ea-a4c2-b9d8c79e513c
	I0926 18:15:37.954967    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:37.954971    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:37.954974    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:37.954978    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:37.954981    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:38 GMT
	I0926 18:15:37.955148    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9kjdl","generateName":"kube-proxy-","namespace":"kube-system","uid":"979606a2-6bc4-46c0-8333-000bc25722f3","resourceVersion":"1221","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec618504-e724-4298-bf49-56d9dcae8b25","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec618504-e724-4298-bf49-56d9dcae8b25\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6395 chars]
	I0926 18:15:38.153228    5496 request.go:632] Waited for 197.663213ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:38.153375    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:38.153386    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:38.153397    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:38.153404    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:38.155572    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:38.155584    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:38.155590    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:38.155595    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:38.155598    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:38 GMT
	I0926 18:15:38.155603    5496 round_trippers.go:580]     Audit-Id: 6c454f18-073a-4419-a617-9b93353b93ec
	I0926 18:15:38.155606    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:38.155610    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:38.156089    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1203","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5300 chars]
	I0926 18:15:38.156356    5496 pod_ready.go:98] node "multinode-108000" hosting pod "kube-proxy-9kjdl" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:38.156369    5496 pod_ready.go:82] duration metric: took 400.60195ms for pod "kube-proxy-9kjdl" in "kube-system" namespace to be "Ready" ...
	E0926 18:15:38.156377    5496 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-108000" hosting pod "kube-proxy-9kjdl" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:38.156390    5496 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-ngs2x" in "kube-system" namespace to be "Ready" ...
	I0926 18:15:38.352472    5496 request.go:632] Waited for 195.963109ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ngs2x
	I0926 18:15:38.352547    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ngs2x
	I0926 18:15:38.352557    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:38.352568    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:38.352575    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:38.354731    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:38.354744    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:38.354757    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:38.354763    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:38.354769    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:38.354776    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:38.354780    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:38 GMT
	I0926 18:15:38.354783    5496 round_trippers.go:580]     Audit-Id: 13b915e8-c5a5-4ba8-a37e-887fcb24c5e8
	I0926 18:15:38.354964    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ngs2x","generateName":"kube-proxy-","namespace":"kube-system","uid":"f95c0316-b4a8-4f0c-a90b-a88af50fbc68","resourceVersion":"1040","creationTimestamp":"2024-09-27T01:09:40Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec618504-e724-4298-bf49-56d9dcae8b25","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:09:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec618504-e724-4298-bf49-56d9dcae8b25\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0926 18:15:38.551339    5496 request.go:632] Waited for 195.960776ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-108000-m02
	I0926 18:15:38.551415    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000-m02
	I0926 18:15:38.551421    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:38.551427    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:38.551432    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:38.553157    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:38.553168    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:38.553173    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:38.553183    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:38.553186    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:38 GMT
	I0926 18:15:38.553189    5496 round_trippers.go:580]     Audit-Id: 353635c7-2e16-43cf-b82a-1460b8b14ef7
	I0926 18:15:38.553192    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:38.553195    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:38.553266    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000-m02","uid":"653db940-78e0-431e-befd-25309d2a6cc8","resourceVersion":"1071","creationTimestamp":"2024-09-27T01:13:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_26T18_13_29_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:13:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3812 chars]
	I0926 18:15:38.553454    5496 pod_ready.go:93] pod "kube-proxy-ngs2x" in "kube-system" namespace has status "Ready":"True"
	I0926 18:15:38.553462    5496 pod_ready.go:82] duration metric: took 397.064629ms for pod "kube-proxy-ngs2x" in "kube-system" namespace to be "Ready" ...
	I0926 18:15:38.553469    5496 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-pwrqj" in "kube-system" namespace to be "Ready" ...
	I0926 18:15:38.751250    5496 request.go:632] Waited for 197.739854ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pwrqj
	I0926 18:15:38.751281    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pwrqj
	I0926 18:15:38.751286    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:38.751292    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:38.751296    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:38.752892    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:38.752901    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:38.752907    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:38.752922    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:38.752930    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:38.752932    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:38 GMT
	I0926 18:15:38.752935    5496 round_trippers.go:580]     Audit-Id: 5b3bd9ca-5e5e-4b9c-8224-a8d0e4244a1b
	I0926 18:15:38.752942    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:38.753015    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pwrqj","generateName":"kube-proxy-","namespace":"kube-system","uid":"dfc98f0e-705d-41fd-a871-9d4f8455b11d","resourceVersion":"1158","creationTimestamp":"2024-09-27T01:10:31Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec618504-e724-4298-bf49-56d9dcae8b25","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:10:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec618504-e724-4298-bf49-56d9dcae8b25\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0926 18:15:38.951829    5496 request.go:632] Waited for 198.542661ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-108000-m03
	I0926 18:15:38.951884    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000-m03
	I0926 18:15:38.951890    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:38.951896    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:38.951899    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:38.953525    5496 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0926 18:15:38.953533    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:38.953538    5496 round_trippers.go:580]     Content-Length: 210
	I0926 18:15:38.953541    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:39 GMT
	I0926 18:15:38.953543    5496 round_trippers.go:580]     Audit-Id: ea2855c8-1c82-4436-8d56-374bdd2e4173
	I0926 18:15:38.953545    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:38.953548    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:38.953550    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:38.953556    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:38.953569    5496 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-108000-m03\" not found","reason":"NotFound","details":{"name":"multinode-108000-m03","kind":"nodes"},"code":404}
	I0926 18:15:38.953689    5496 pod_ready.go:98] node "multinode-108000-m03" hosting pod "kube-proxy-pwrqj" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-108000-m03": nodes "multinode-108000-m03" not found
	I0926 18:15:38.953698    5496 pod_ready.go:82] duration metric: took 400.222259ms for pod "kube-proxy-pwrqj" in "kube-system" namespace to be "Ready" ...
	E0926 18:15:38.953704    5496 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-108000-m03" hosting pod "kube-proxy-pwrqj" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-108000-m03": nodes "multinode-108000-m03" not found
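	
Here the skip has a different cause: node multinode-108000-m03 no longer exists after the restart, so the node GET returns 404 and the wait is abandoned rather than retried. A brief sketch of how that case is usually told apart from a transient API failure, assuming k8s.io/apimachinery's error helpers (not minikube's exact code):

    // Sketch: the 404 Status body above decodes to a *StatusError;
    // apierrors.IsNotFound separates "node deleted" from a transient failure.
    package main

    import (
    	"context"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func nodeGone(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
    	_, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if apierrors.IsNotFound(err) {
    		return true, nil // node deleted: skip its pods, don't retry
    	}
    	return false, err // nil, or a transient error worth retrying
    }
	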
	I0926 18:15:38.953712    5496 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:15:39.151391    5496 request.go:632] Waited for 197.640189ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-108000
	I0926 18:15:39.151456    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-108000
	I0926 18:15:39.151463    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:39.151472    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:39.151478    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:39.153561    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:39.153573    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:39.153579    5496 round_trippers.go:580]     Audit-Id: 77b55748-0b02-493c-be9d-e0d00bcb9c4a
	I0926 18:15:39.153583    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:39.153586    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:39.153588    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:39.153592    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:39.153595    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:39 GMT
	I0926 18:15:39.153764    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-108000","namespace":"kube-system","uid":"e5b482e0-154d-4620-8f24-1ebf181b9c1b","resourceVersion":"1207","creationTimestamp":"2024-09-27T01:08:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"40cf241c42ae492b94bc92cec52f27f4","kubernetes.io/config.mirror":"40cf241c42ae492b94bc92cec52f27f4","kubernetes.io/config.seen":"2024-09-27T01:08:53.027449029Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5439 chars]
	I0926 18:15:39.351587    5496 request.go:632] Waited for 197.563253ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:39.351681    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:39.351701    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:39.351714    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:39.351722    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:39.356273    5496 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0926 18:15:39.356285    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:39.356290    5496 round_trippers.go:580]     Audit-Id: cee47110-3992-4d7a-a6ce-e4b45276cf1d
	I0926 18:15:39.356294    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:39.356297    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:39.356299    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:39.356301    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:39.356304    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:39 GMT
	I0926 18:15:39.356418    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:39.356629    5496 pod_ready.go:98] node "multinode-108000" hosting pod "kube-scheduler-multinode-108000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:39.356639    5496 pod_ready.go:82] duration metric: took 402.920719ms for pod "kube-scheduler-multinode-108000" in "kube-system" namespace to be "Ready" ...
	E0926 18:15:39.356646    5496 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-108000" hosting pod "kube-scheduler-multinode-108000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-108000" has status "Ready":"False"
	I0926 18:15:39.356652    5496 pod_ready.go:39] duration metric: took 1.733070528s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0926 18:15:39.356664    5496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 18:15:39.367880    5496 command_runner.go:130] > -16
	I0926 18:15:39.367906    5496 ops.go:34] apiserver oom_adj: -16
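	
The oom_adj probe above confirms the kube-apiserver process is shielded from the kernel OOM killer: the logged command reads /proc/<pid>/oom_adj for the apiserver PID, and on the legacy oom_adj scale (-17 to +15) a value of -16 makes the process much less likely to be killed under memory pressure. A small illustrative Go sketch of the same probe, mirroring the pgrep-based shell command in the log (not minikube's implementation):

    // Sketch: read the legacy OOM adjustment for kube-apiserver, like the
    // `cat /proc/$(pgrep kube-apiserver)/oom_adj` command logged above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		panic(err) // pgrep exits non-zero when no process matches
    	}
    	pids := strings.Fields(string(out))
    	data, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(strings.TrimSpace(string(data))) // e.g. -16
    }
	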
	I0926 18:15:39.367911    5496 kubeadm.go:597] duration metric: took 10.178588037s to restartPrimaryControlPlane
	I0926 18:15:39.367916    5496 kubeadm.go:394] duration metric: took 10.201506133s to StartCluster
	I0926 18:15:39.367931    5496 settings.go:142] acquiring lock: {Name:mka8948d0f70add5c5f20f2eca7124a97a496c1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:15:39.368021    5496 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 18:15:39.368410    5496 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/kubeconfig: {Name:mkb9c069c578c490d8cb734b05a21821e9215482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:15:39.368773    5496 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:15:39.368842    5496 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 18:15:39.368928    5496 config.go:182] Loaded profile config "multinode-108000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:15:39.411973    5496 out.go:177] * Verifying Kubernetes components...
	I0926 18:15:39.469853    5496 out.go:177] * Enabled addons: 
	I0926 18:15:39.490939    5496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:15:39.511963    5496 addons.go:510] duration metric: took 143.130056ms for enable addons: enabled=[]
	I0926 18:15:39.632619    5496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 18:15:39.645920    5496 node_ready.go:35] waiting up to 6m0s for node "multinode-108000" to be "Ready" ...
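	
What follows is the readiness poll itself: roughly every 500ms the node object is re-fetched until its Ready condition flips to True or the 6m0s budget expires. A sketch of that loop using the wait helpers from a recent k8s.io/apimachinery (names assumed; minikube's own loop may differ):

    // Sketch of the ~500ms polling pattern visible below, assuming
    // client-go plus apimachinery's wait package.
    package main

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient error: keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    }
	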
	I0926 18:15:39.645984    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:39.645990    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:39.645996    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:39.645999    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:39.647771    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:39.647779    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:39.647784    5496 round_trippers.go:580]     Audit-Id: 2af6b540-0cb6-4cb7-9d23-5549f361b2a8
	I0926 18:15:39.647787    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:39.647790    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:39.647792    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:39.647794    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:39.647796    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:39 GMT
	I0926 18:15:39.647973    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:40.147221    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:40.147241    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:40.147250    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:40.147257    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:40.149046    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:40.149059    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:40.149068    5496 round_trippers.go:580]     Audit-Id: 6eef010a-be8d-4142-b1ab-c4e8fb8b8a6d
	I0926 18:15:40.149074    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:40.149082    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:40.149090    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:40.149095    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:40.149100    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:40 GMT
	I0926 18:15:40.149314    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:40.647082    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:40.647109    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:40.647121    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:40.647125    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:40.649949    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:40.649963    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:40.649970    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:40.649975    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:40.649979    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:40.649983    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:40 GMT
	I0926 18:15:40.649987    5496 round_trippers.go:580]     Audit-Id: 8142bc98-9616-4eed-8557-f89eb4761b93
	I0926 18:15:40.649990    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:40.650155    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:41.147405    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:41.147428    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:41.147440    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:41.147447    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:41.150144    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:41.150156    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:41.150163    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:41.150167    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:41 GMT
	I0926 18:15:41.150172    5496 round_trippers.go:580]     Audit-Id: f4bc574e-0a0f-4946-ba1b-cec1a8d52514
	I0926 18:15:41.150176    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:41.150180    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:41.150185    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:41.150624    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:41.648210    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:41.648234    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:41.648246    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:41.648252    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:41.651452    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:15:41.651468    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:41.651475    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:41 GMT
	I0926 18:15:41.651478    5496 round_trippers.go:580]     Audit-Id: 828de500-a31f-4597-abde-16b7a5328d86
	I0926 18:15:41.651482    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:41.651486    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:41.651514    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:41.651523    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:41.651615    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:41.651871    5496 node_ready.go:53] node "multinode-108000" has status "Ready":"False"
	I0926 18:15:42.147225    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:42.147245    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:42.147256    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:42.147262    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:42.149721    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:42.149735    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:42.149742    5496 round_trippers.go:580]     Audit-Id: 31bfc2e8-f07f-46ca-9047-a6aa9755b1d7
	I0926 18:15:42.149747    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:42.149750    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:42.149754    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:42.149757    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:42.149760    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:42 GMT
	I0926 18:15:42.149931    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:42.648266    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:42.648290    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:42.648302    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:42.648310    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:42.651671    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:15:42.651688    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:42.651695    5496 round_trippers.go:580]     Audit-Id: 9aac7977-8719-4334-b9ac-7cc704bfbe28
	I0926 18:15:42.651699    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:42.651710    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:42.651717    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:42.651720    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:42.651723    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:42 GMT
	I0926 18:15:42.652089    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:43.146122    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:43.146151    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:43.146208    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:43.146215    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:43.148579    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:43.148595    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:43.148601    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:43.148606    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:43 GMT
	I0926 18:15:43.148609    5496 round_trippers.go:580]     Audit-Id: ab475b3c-fd55-4a73-8937-b0e4cf8651e7
	I0926 18:15:43.148613    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:43.148616    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:43.148620    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:43.148731    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:43.646565    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:43.646593    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:43.646606    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:43.646628    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:43.649347    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:43.649371    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:43.649382    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:43.649390    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:43.649397    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:43.649404    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:43 GMT
	I0926 18:15:43.649412    5496 round_trippers.go:580]     Audit-Id: a3242efd-e344-4614-ac4e-da66db94e4ac
	I0926 18:15:43.649418    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:43.649563    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:44.146285    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:44.146309    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:44.146326    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:44.146333    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:44.149072    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:44.149087    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:44.149094    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:44.149098    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:44.149101    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:44.149104    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:44 GMT
	I0926 18:15:44.149108    5496 round_trippers.go:580]     Audit-Id: c78d9ad7-b266-4e97-bc18-d776ed1ec708
	I0926 18:15:44.149110    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:44.149347    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:44.149615    5496 node_ready.go:53] node "multinode-108000" has status "Ready":"False"
	I0926 18:15:44.646839    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:44.646914    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:44.646944    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:44.646951    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:44.649331    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:44.649343    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:44.649349    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:44.649380    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:44.649384    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:44.649387    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:44.649390    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:44 GMT
	I0926 18:15:44.649393    5496 round_trippers.go:580]     Audit-Id: f16df67f-5c78-4531-95cc-bddd6e410c30
	I0926 18:15:44.649499    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:45.146768    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:45.146791    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:45.146803    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:45.146811    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:45.149481    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:45.149497    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:45.149504    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:45.149509    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:45.149516    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:45.149520    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:45 GMT
	I0926 18:15:45.149524    5496 round_trippers.go:580]     Audit-Id: 294c5f7c-95aa-44c0-b8ab-ffb11bd65994
	I0926 18:15:45.149528    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:45.149826    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:45.647608    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:45.647635    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:45.647647    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:45.647653    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:45.650647    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:45.650662    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:45.650669    5496 round_trippers.go:580]     Audit-Id: e1c797e2-a13a-48fb-a9e7-6b8c5114275e
	I0926 18:15:45.650673    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:45.650676    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:45.650680    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:45.650683    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:45.650686    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:45 GMT
	I0926 18:15:45.650750    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:46.147387    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:46.147416    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:46.147428    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:46.147434    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:46.150179    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:46.150194    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:46.150200    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:46.150205    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:46 GMT
	I0926 18:15:46.150208    5496 round_trippers.go:580]     Audit-Id: 74a0db55-27bf-4445-9980-c9e959b41522
	I0926 18:15:46.150212    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:46.150215    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:46.150219    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:46.150456    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:46.150715    5496 node_ready.go:53] node "multinode-108000" has status "Ready":"False"
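The round_trippers lines above come from client-go's debug transport, which wraps the underlying HTTP transport and, at this verbosity, prints each request's method, URL, and headers plus the response status and headers. A minimal sketch of that wrapping pattern, not client-go's actual implementation (the loggingRoundTripper name is hypothetical):

package main

import (
	"log"
	"net/http"
)

// loggingRoundTripper wraps another http.RoundTripper and logs each
// request and response, similar in spirit to the debug transport that
// produced the log lines above.
type loggingRoundTripper struct {
	next http.RoundTripper
}

func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	for k, v := range req.Header {
		log.Printf("    %s: %v", k, v)
	}
	resp, err := l.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s", resp.Status)
	for k, v := range resp.Header {
		log.Printf("    %s: %v", k, v)
	}
	return resp, nil
}

func main() {
	// Every request made through this client is now logged in and out.
	client := &http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}}
	resp, err := client.Get("https://example.com/")
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}

Because the wrapper only observes the request and delegates to the inner transport, it composes with any other transport (auth, retries) without changing behavior.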
	I0926 18:15:46.648183    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:46.648246    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:46.648259    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:46.648266    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:46.650780    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:46.650795    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:46.650801    5496 round_trippers.go:580]     Audit-Id: 3d3553d4-ea47-4dbd-9bc7-9ca44dfc10bb
	I0926 18:15:46.650804    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:46.650807    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:46.650811    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:46.650813    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:46.650817    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:46 GMT
	I0926 18:15:46.650911    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:47.147243    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:47.147269    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:47.147281    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:47.147286    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:47.150053    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:47.150065    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:47.150107    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:47.150122    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:47.150126    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:47 GMT
	I0926 18:15:47.150133    5496 round_trippers.go:580]     Audit-Id: a51c9902-f023-498d-8e55-8a8baf3a507e
	I0926 18:15:47.150137    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:47.150142    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:47.150338    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:47.647019    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:47.647041    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:47.647053    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:47.647060    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:47.649853    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:47.649870    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:47.649881    5496 round_trippers.go:580]     Audit-Id: d61c241f-93a8-48df-ae52-24d212411a49
	I0926 18:15:47.649888    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:47.649893    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:47.649897    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:47.649901    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:47.649905    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:47 GMT
	I0926 18:15:47.649974    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:48.146586    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:48.146600    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:48.146607    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:48.146609    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:48.148507    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:48.148519    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:48.148524    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:48.148527    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:48.148529    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:48.148532    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:48 GMT
	I0926 18:15:48.148535    5496 round_trippers.go:580]     Audit-Id: 15e3c12e-ba63-4c5e-9b0a-d314bdefe032
	I0926 18:15:48.148537    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:48.149117    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:48.646462    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:48.646501    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:48.646509    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:48.646514    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:48.648875    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:48.648887    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:48.648892    5496 round_trippers.go:580]     Audit-Id: 2befc4ab-814c-4256-8da7-1050a0cca48d
	I0926 18:15:48.648895    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:48.648898    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:48.648900    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:48.648903    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:48.648905    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:48 GMT
	I0926 18:15:48.649208    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:48.649414    5496 node_ready.go:53] node "multinode-108000" has status "Ready":"False"
	I0926 18:15:49.147028    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:49.147047    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:49.147058    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:49.147064    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:49.149308    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:49.149320    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:49.149327    5496 round_trippers.go:580]     Audit-Id: 15cc7eff-6de1-4a45-8be6-a4f58b18f504
	I0926 18:15:49.149334    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:49.149341    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:49.149348    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:49.149355    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:49.149360    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:49 GMT
	I0926 18:15:49.149557    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:49.647633    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:49.647654    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:49.647666    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:49.647670    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:49.650288    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:49.650304    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:49.650312    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:49.650324    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:49.650330    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:49.650333    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:49.650338    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:49 GMT
	I0926 18:15:49.650342    5496 round_trippers.go:580]     Audit-Id: 35bb502e-67f0-4d85-9636-31fc930b739e
	I0926 18:15:49.650451    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:50.147292    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:50.147329    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:50.147337    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:50.147342    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:50.149517    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:50.149531    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:50.149539    5496 round_trippers.go:580]     Audit-Id: 835635e1-2c69-4eb5-b337-1d8a755e0397
	I0926 18:15:50.149544    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:50.149548    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:50.149552    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:50.149557    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:50.149561    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:50 GMT
	I0926 18:15:50.149647    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:50.646117    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:50.646172    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:50.646188    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:50.646197    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:50.648096    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:50.648111    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:50.648117    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:50.648122    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:50.648125    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:50.648127    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:50.648131    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:50 GMT
	I0926 18:15:50.648134    5496 round_trippers.go:580]     Audit-Id: 52a44732-48d9-42b4-ad51-ae93b9e48478
	I0926 18:15:50.648295    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:51.147605    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:51.147631    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:51.147643    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:51.147652    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:51.150683    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:15:51.150702    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:51.150713    5496 round_trippers.go:580]     Audit-Id: e1e22eb3-0afb-4cc7-9274-59cdda58db39
	I0926 18:15:51.150719    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:51.150726    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:51.150731    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:51.150736    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:51.150741    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:51 GMT
	I0926 18:15:51.150914    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:51.151178    5496 node_ready.go:53] node "multinode-108000" has status "Ready":"False"
	I0926 18:15:51.646326    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:51.646431    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:51.646447    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:51.646453    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:51.649131    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:51.649146    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:51.649153    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:51 GMT
	I0926 18:15:51.649158    5496 round_trippers.go:580]     Audit-Id: 9cb91ab4-2d11-4532-a40b-cc1b5ee81802
	I0926 18:15:51.649161    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:51.649164    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:51.649168    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:51.649174    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:51.649593    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:52.148217    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:52.148251    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:52.148262    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:52.148268    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:52.150889    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:52.150904    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:52.150914    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:52.150919    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:52.150923    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:52.150929    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:52 GMT
	I0926 18:15:52.150933    5496 round_trippers.go:580]     Audit-Id: 6406a369-6a5f-4b79-b384-f8195f179467
	I0926 18:15:52.150939    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:52.151107    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:52.646242    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:52.646270    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:52.646282    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:52.646298    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:52.648904    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:52.648917    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:52.648924    5496 round_trippers.go:580]     Audit-Id: 8d595130-e0e8-4a9f-8325-ecd9c038b2ec
	I0926 18:15:52.648928    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:52.648933    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:52.648937    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:52.648941    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:52.648945    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:52 GMT
	I0926 18:15:52.649050    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:53.202365    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:53.202394    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:53.202406    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:53.202413    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:53.205265    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:53.205284    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:53.205291    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:53.205296    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:53.205299    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:53.205302    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:53.205305    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:53 GMT
	I0926 18:15:53.205309    5496 round_trippers.go:580]     Audit-Id: 3c1e570b-fb26-49b4-acd4-de9d6accf47d
	I0926 18:15:53.205527    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:53.700967    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:53.701022    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:53.701037    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:53.701043    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:53.703812    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:53.703827    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:53.703834    5496 round_trippers.go:580]     Audit-Id: da999e41-17c5-45fe-826f-98d56efdbc9d
	I0926 18:15:53.703839    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:53.703842    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:53.703845    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:53.703850    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:53.703854    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:53 GMT
	I0926 18:15:53.703954    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:53.704211    5496 node_ready.go:53] node "multinode-108000" has status "Ready":"False"
	I0926 18:15:54.202328    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:54.202355    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:54.202367    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:54.202375    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:54.205451    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:15:54.205480    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:54.205506    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:54 GMT
	I0926 18:15:54.205512    5496 round_trippers.go:580]     Audit-Id: 25373c7c-92f8-4dda-b00b-5c0e1b198873
	I0926 18:15:54.205516    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:54.205520    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:54.205525    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:54.205530    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:54.205621    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:54.701377    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:54.701401    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:54.701410    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:54.701416    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:54.703608    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:54.703620    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:54.703624    5496 round_trippers.go:580]     Audit-Id: e7e9133c-3f51-4fbf-823b-1bfe8e1cee58
	I0926 18:15:54.703628    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:54.703630    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:54.703632    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:54.703636    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:54.703638    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:54 GMT
	I0926 18:15:54.703967    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:55.201675    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:55.201703    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:55.201743    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:55.201753    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:55.204412    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:55.204427    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:55.204434    5496 round_trippers.go:580]     Audit-Id: 5c53dcba-4548-48bd-acb1-dfed0a263959
	I0926 18:15:55.204440    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:55.204446    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:55.204449    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:55.204453    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:55.204457    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:55 GMT
	I0926 18:15:55.204593    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:55.701211    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:55.701249    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:55.701271    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:55.701294    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:55.703603    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:55.703611    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:55.703616    5496 round_trippers.go:580]     Audit-Id: d5c9b478-c027-41c6-ad2a-3c9bcc0d6c32
	I0926 18:15:55.703620    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:55.703624    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:55.703627    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:55.703632    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:55.703637    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:55 GMT
	I0926 18:15:55.703858    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1237","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5516 chars]
	I0926 18:15:56.203084    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:56.203106    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:56.203118    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:56.203124    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:56.205928    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:56.205942    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:56.205949    5496 round_trippers.go:580]     Audit-Id: 122d71e6-ac12-4098-9ded-b9c3a04efc33
	I0926 18:15:56.205954    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:56.205980    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:56.205988    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:56.205992    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:56.205997    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:56 GMT
	I0926 18:15:56.206116    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1348","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0926 18:15:56.206370    5496 node_ready.go:49] node "multinode-108000" has status "Ready":"True"
	I0926 18:15:56.206386    5496 node_ready.go:38] duration metric: took 16.505593065s for node "multinode-108000" to be "Ready" ...
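The node_ready check that just completed is a simple poll: fetch the Node object roughly every 500ms, as the repeated GETs above show, and inspect its Ready condition until it reports True. A minimal client-go sketch of that loop, assuming a standard kubeconfig; this is illustrative, not minikube's node_ready implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the API server until the named node reports
// Ready=True, mirroring the GET-every-500ms pattern in the log above.
func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			// No Ready condition yet; keep polling.
			return false, nil
		})
}

func main() {
	// Assumed kubeconfig location; the node name is taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(context.Background(), cs, "multinode-108000", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}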
	I0926 18:15:56.206394    5496 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0926 18:15:56.206440    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0926 18:15:56.206448    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:56.206456    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:56.206461    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:56.208603    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:56.208616    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:56.208624    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:56.208630    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:56.208633    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:56 GMT
	I0926 18:15:56.208635    5496 round_trippers.go:580]     Audit-Id: 61da8b37-0a42-4435-b609-7377a91b7d3e
	I0926 18:15:56.208639    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:56.208642    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:56.209824    5496 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1349"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 88976 chars]
	I0926 18:15:56.211733    5496 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace to be "Ready" ...
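The pod_ready phase applies the same pattern per pod: list the kube-system pods, as the PodList request above does, then poll each until its PodReady condition is True. A hedged sketch of the readiness predicate and the list call (illustrative only, not minikube's pod_ready code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the per-pod check: a pod counts as "Ready" when
// its status carries the PodReady condition with status True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location, as in the earlier sketches.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Fetch the system pods, as in the PodList request above; a label
	// selector such as "k8s-app=kube-dns" would narrow this to coredns.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		fmt.Printf("%s ready=%v\n", pod.Name, isPodReady(&pod))
	}
}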
	I0926 18:15:56.211771    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:15:56.211777    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:56.211783    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:56.211787    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:56.212766    5496 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0926 18:15:56.212775    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:56.212781    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:56.212785    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:56.212791    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:56.212795    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:56.212797    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:56 GMT
	I0926 18:15:56.212800    5496 round_trippers.go:580]     Audit-Id: 8e5a3db4-8372-4799-9a3d-207543401e6a
	I0926 18:15:56.212967    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:15:56.213209    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:56.213217    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:56.213223    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:56.213227    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:56.214083    5496 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0926 18:15:56.214092    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:56.214099    5496 round_trippers.go:580]     Audit-Id: 2d49d4ec-0c96-4aef-b854-171e10728aab
	I0926 18:15:56.214104    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:56.214108    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:56.214113    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:56.214117    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:56.214120    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:56 GMT
	I0926 18:15:56.214282    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1348","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0926 18:15:56.714012    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:15:56.714038    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:56.714050    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:56.714057    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:56.716847    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:56.716860    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:56.716867    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:56 GMT
	I0926 18:15:56.716871    5496 round_trippers.go:580]     Audit-Id: 672c41eb-7827-4345-ab5f-c4034692866c
	I0926 18:15:56.716898    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:56.716904    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:56.716911    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:56.716917    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:56.717040    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:15:56.717409    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:56.717419    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:56.717427    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:56.717432    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:56.718807    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:56.718815    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:56.718822    5496 round_trippers.go:580]     Audit-Id: 1eba1797-7604-48e5-b4ce-809a8efa23bf
	I0926 18:15:56.718826    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:56.718831    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:56.718835    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:56.718838    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:56.718841    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:56 GMT
	I0926 18:15:56.719060    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1348","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0926 18:15:57.212304    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:15:57.212327    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:57.212339    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:57.212346    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:57.215305    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:57.215319    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:57.215326    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:57 GMT
	I0926 18:15:57.215331    5496 round_trippers.go:580]     Audit-Id: d2a664ee-7641-4337-9bf7-0aeb4ca9f53a
	I0926 18:15:57.215335    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:57.215339    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:57.215342    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:57.215346    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:57.215467    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:15:57.215841    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:57.215850    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:57.215858    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:57.215863    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:57.217440    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:57.217449    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:57.217454    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:57.217457    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:57 GMT
	I0926 18:15:57.217461    5496 round_trippers.go:580]     Audit-Id: ea932f85-7c75-4b35-b73b-465e2c509b9d
	I0926 18:15:57.217463    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:57.217468    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:57.217472    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:57.217802    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1348","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0926 18:15:57.712215    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:15:57.712237    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:57.712249    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:57.712253    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:57.714882    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:57.714896    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:57.714904    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:57 GMT
	I0926 18:15:57.714908    5496 round_trippers.go:580]     Audit-Id: 16a30886-7e5b-42b3-a174-2be2e25ac06c
	I0926 18:15:57.714912    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:57.714915    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:57.714964    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:57.714972    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:57.715080    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:15:57.715455    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:57.715464    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:57.715472    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:57.715477    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:57.716752    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:57.716760    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:57.716764    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:57.716768    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:57.716770    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:57.716774    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:57 GMT
	I0926 18:15:57.716776    5496 round_trippers.go:580]     Audit-Id: 7d16a94f-4d2f-410f-9f26-8680795887e3
	I0926 18:15:57.716781    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:57.717030    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1348","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0926 18:15:58.212412    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:15:58.212433    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:58.212444    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:58.212450    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:58.214715    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:58.214728    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:58.214735    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:58.214738    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:58.214742    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:58.214746    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:58.214750    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:58 GMT
	I0926 18:15:58.214753    5496 round_trippers.go:580]     Audit-Id: 0f2b31c3-e408-46e2-9415-a4ff46f2dfac
	I0926 18:15:58.214827    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:15:58.215220    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:58.215229    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:58.215237    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:58.215241    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:58.216413    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:58.216421    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:58.216426    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:58.216429    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:58 GMT
	I0926 18:15:58.216432    5496 round_trippers.go:580]     Audit-Id: b4d539a1-dacc-4795-97bf-47cfcba4ec3c
	I0926 18:15:58.216435    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:58.216438    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:58.216440    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:58.216514    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1348","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0926 18:15:58.216686    5496 pod_ready.go:103] pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace has status "Ready":"False"
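	
	The stretch of log above is one iteration of minikube's readiness wait: roughly every 500ms the test binary GETs the coredns pod and its node, and pod_ready.go reports the pod's "Ready" condition until it turns "True" or the wait gives up. Below is a minimal, self-contained sketch of that polling pattern with client-go; it is illustrative only (the podReady helper, the kubeconfig wiring, and the 6-minute timeout are assumptions inferred from the log, not minikube's actual pod_ready.go code).
	
	// readiness_poll_sketch.go - a hypothetical, minimal version of the
	// poll-until-Ready loop visible in the surrounding log. Not minikube code.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		// Assumed deadline; the real wait timeout is not visible in this log.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
	
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-hxdhm", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			if podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", pod.Name, pod.Namespace)
			select {
			case <-ctx.Done():
				panic("timed out waiting for coredns to become Ready")
			case <-time.After(500 * time.Millisecond): // matches the ~500ms cadence above
			}
		}
	}
	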
	I0926 18:15:58.714077    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:15:58.714098    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:58.714110    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:58.714116    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:58.717289    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:15:58.717309    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:58.717317    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:58.717322    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:58.717327    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:58 GMT
	I0926 18:15:58.717332    5496 round_trippers.go:580]     Audit-Id: 33e548d3-71b4-41b2-a1d5-adb470881de3
	I0926 18:15:58.717336    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:58.717350    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:58.717486    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:15:58.717877    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:58.717887    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:58.717895    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:58.717899    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:58.719406    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:58.719417    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:58.719424    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:58.719430    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:58.719436    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:58.719441    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:58 GMT
	I0926 18:15:58.719448    5496 round_trippers.go:580]     Audit-Id: df472da9-1c0c-4b68-a7c0-acf36a5fa7a6
	I0926 18:15:58.719452    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:58.719566    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1348","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0926 18:15:59.213590    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:15:59.213613    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:59.213624    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:59.213630    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:59.215847    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:59.215863    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:59.215874    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:59 GMT
	I0926 18:15:59.215881    5496 round_trippers.go:580]     Audit-Id: 6cb8f815-7424-49a4-b9f7-da498411df5e
	I0926 18:15:59.215887    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:59.215894    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:59.215898    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:59.215902    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:59.216172    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:15:59.216558    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:59.216567    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:59.216575    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:59.216578    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:59.217991    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:59.217999    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:59.218003    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:59 GMT
	I0926 18:15:59.218005    5496 round_trippers.go:580]     Audit-Id: 7bb7792c-1170-4d30-a62c-11150cd8ad70
	I0926 18:15:59.218007    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:59.218009    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:59.218012    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:59.218016    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:59.218152    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:15:59.712302    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:15:59.712318    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:59.712326    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:59.712330    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:59.714530    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:15:59.714543    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:59.714549    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:59.714552    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:59.714555    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:59.714558    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:59.714560    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:59 GMT
	I0926 18:15:59.714564    5496 round_trippers.go:580]     Audit-Id: c8d73b00-d622-4d32-9315-9399cbe25354
	I0926 18:15:59.714651    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:15:59.714945    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:15:59.714952    5496 round_trippers.go:469] Request Headers:
	I0926 18:15:59.714958    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:15:59.714961    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:15:59.716160    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:15:59.716171    5496 round_trippers.go:577] Response Headers:
	I0926 18:15:59.716178    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:15:59.716185    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:15:59.716189    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:15:59 GMT
	I0926 18:15:59.716201    5496 round_trippers.go:580]     Audit-Id: d62e282e-5117-45a9-a633-026e25ffb4f7
	I0926 18:15:59.716206    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:15:59.716209    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:15:59.716316    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:00.212726    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:00.212747    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:00.212759    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:00.212766    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:00.215319    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:00.215332    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:00.215339    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:00 GMT
	I0926 18:16:00.215342    5496 round_trippers.go:580]     Audit-Id: 45bd5cc8-73e5-46c5-89ec-b0de7dc656dc
	I0926 18:16:00.215347    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:00.215352    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:00.215357    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:00.215361    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:00.215759    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:00.216046    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:00.216053    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:00.216059    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:00.216063    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:00.217222    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:00.217233    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:00.217240    5496 round_trippers.go:580]     Audit-Id: 545490b6-b247-40aa-8eb9-a3adb2b53791
	I0926 18:16:00.217246    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:00.217253    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:00.217256    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:00.217261    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:00.217264    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:00 GMT
	I0926 18:16:00.217457    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:00.217623    5496 pod_ready.go:103] pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace has status "Ready":"False"
	I0926 18:16:00.712324    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:00.712344    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:00.712355    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:00.712362    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:00.714967    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:00.714979    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:00.714986    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:00.714990    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:00.714993    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:00.715014    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:00 GMT
	I0926 18:16:00.715021    5496 round_trippers.go:580]     Audit-Id: ceaa57e5-97c0-4be5-9107-34df439ba6b9
	I0926 18:16:00.715025    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:00.715270    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:00.715655    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:00.715664    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:00.715672    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:00.715681    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:00.716935    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:00.716943    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:00.716948    5496 round_trippers.go:580]     Audit-Id: 9c219c44-9cd2-4dcc-a080-9754ce4c68c0
	I0926 18:16:00.716953    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:00.716957    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:00.716961    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:00.716966    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:00.716970    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:00 GMT
	I0926 18:16:00.717123    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:01.213999    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:01.214022    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:01.214032    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:01.214038    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:01.216784    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:01.216798    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:01.216805    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:01.216810    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:01.216814    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:01 GMT
	I0926 18:16:01.216818    5496 round_trippers.go:580]     Audit-Id: 1c32c3ec-34d9-4329-a95c-7a623e33a5e3
	I0926 18:16:01.216821    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:01.216825    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:01.216938    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:01.217324    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:01.217334    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:01.217342    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:01.217349    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:01.218688    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:01.218693    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:01.218698    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:01.218701    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:01 GMT
	I0926 18:16:01.218703    5496 round_trippers.go:580]     Audit-Id: 2b8d27b0-53cb-483a-b5e8-2427a38a3ea6
	I0926 18:16:01.218706    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:01.218713    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:01.218716    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:01.218891    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:01.712366    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:01.712386    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:01.712399    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:01.712407    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:01.715539    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:16:01.715557    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:01.715567    5496 round_trippers.go:580]     Audit-Id: 4fcca3fb-432d-4dd9-bc80-86573dcfd1e2
	I0926 18:16:01.715573    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:01.715579    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:01.715599    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:01.715611    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:01.715620    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:01 GMT
	I0926 18:16:01.715820    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:01.716111    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:01.716117    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:01.716123    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:01.716126    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:01.717445    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:01.717453    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:01.717460    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:01.717463    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:01.717466    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:01.717469    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:01.717472    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:01 GMT
	I0926 18:16:01.717475    5496 round_trippers.go:580]     Audit-Id: 640fe6c8-1072-4c29-b5d8-0e4fe03e8745
	I0926 18:16:01.717532    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:02.212614    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:02.212644    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:02.212651    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:02.212655    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:02.214330    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:02.214343    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:02.214350    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:02.214359    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:02 GMT
	I0926 18:16:02.214363    5496 round_trippers.go:580]     Audit-Id: 70ba3b1f-8ca0-4854-96ff-a08f2bc197be
	I0926 18:16:02.214366    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:02.214368    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:02.214370    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:02.214423    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:02.214716    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:02.214723    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:02.214729    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:02.214733    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:02.216107    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:02.216116    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:02.216121    5496 round_trippers.go:580]     Audit-Id: a6b51239-30e1-4d31-b12c-c65183b73325
	I0926 18:16:02.216124    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:02.216128    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:02.216131    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:02.216134    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:02.216136    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:02 GMT
	I0926 18:16:02.216379    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:02.712239    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:02.712267    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:02.712279    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:02.712286    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:02.714720    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:02.714734    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:02.714742    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:02.714746    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:02.714750    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:02.714753    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:02 GMT
	I0926 18:16:02.714756    5496 round_trippers.go:580]     Audit-Id: 9596591f-7f0b-4129-9236-d5093a1455af
	I0926 18:16:02.714760    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:02.715017    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:02.715391    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:02.715407    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:02.715415    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:02.715419    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:02.717053    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:02.717060    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:02.717066    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:02.717069    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:02.717088    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:02 GMT
	I0926 18:16:02.717091    5496 round_trippers.go:580]     Audit-Id: ba065de0-a695-46d7-a843-1f2af8257246
	I0926 18:16:02.717094    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:02.717097    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:02.717301    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:02.717470    5496 pod_ready.go:103] pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace has status "Ready":"False"
	I0926 18:16:03.212525    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:03.212541    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:03.212550    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:03.212555    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:03.214720    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:03.214732    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:03.214738    5496 round_trippers.go:580]     Audit-Id: a79b4b2f-5f5c-493b-93c9-ec1ff1cdb6d6
	I0926 18:16:03.214741    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:03.214758    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:03.214760    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:03.214764    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:03.214766    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:03 GMT
	I0926 18:16:03.214817    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:03.215162    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:03.215169    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:03.215174    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:03.215178    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:03.216455    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:03.216464    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:03.216469    5496 round_trippers.go:580]     Audit-Id: 1c3690bb-9a1a-4c5d-b47d-8a23141028a8
	I0926 18:16:03.216490    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:03.216494    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:03.216496    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:03.216499    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:03.216501    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:03 GMT
	I0926 18:16:03.216563    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:03.712497    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:03.712520    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:03.712548    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:03.712561    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:03.714758    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:03.714768    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:03.714773    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:03.714776    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:03.714778    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:03.714781    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:03.714784    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:03 GMT
	I0926 18:16:03.714786    5496 round_trippers.go:580]     Audit-Id: b20377b1-152f-4b0a-97fa-33cb3f196e68
	I0926 18:16:03.714846    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:03.715145    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:03.715152    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:03.715157    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:03.715160    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:03.716631    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:03.716641    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:03.716647    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:03 GMT
	I0926 18:16:03.716654    5496 round_trippers.go:580]     Audit-Id: 2d77e054-e393-47e1-b6c0-a85a653e5fa8
	I0926 18:16:03.716658    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:03.716660    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:03.716662    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:03.716666    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:03.716913    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:04.213711    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:04.213738    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:04.213749    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:04.213756    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:04.216728    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:04.216743    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:04.216750    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:04 GMT
	I0926 18:16:04.216754    5496 round_trippers.go:580]     Audit-Id: 3856faf2-d665-4ba5-814f-a001bd910e14
	I0926 18:16:04.216758    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:04.216763    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:04.216766    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:04.216769    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:04.216851    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:04.217224    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:04.217233    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:04.217240    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:04.217248    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:04.218633    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:04.218643    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:04.218648    5496 round_trippers.go:580]     Audit-Id: 3f1bb779-cd1d-4b2b-bf1b-44cef6ce1444
	I0926 18:16:04.218650    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:04.218653    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:04.218656    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:04.218659    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:04.218661    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:04 GMT
	I0926 18:16:04.218780    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:04.712790    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:04.712847    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:04.712874    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:04.712882    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:04.715348    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:04.715376    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:04.715392    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:04 GMT
	I0926 18:16:04.715403    5496 round_trippers.go:580]     Audit-Id: 381870e6-411f-4bc7-a5b2-06f3ab0df741
	I0926 18:16:04.715415    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:04.715422    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:04.715427    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:04.715432    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:04.715603    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:04.715927    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:04.715933    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:04.715938    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:04.715945    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:04.717434    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:04.717443    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:04.717447    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:04.717450    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:04.717453    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:04.717455    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:04.717458    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:04 GMT
	I0926 18:16:04.717463    5496 round_trippers.go:580]     Audit-Id: 13275a57-33fb-4475-8904-a0cf82b08de6
	I0926 18:16:04.717520    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:04.717697    5496 pod_ready.go:103] pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace has status "Ready":"False"
	I0926 18:16:05.212553    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:05.212573    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:05.212585    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:05.212592    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:05.214968    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:05.214980    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:05.214986    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:05.214989    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:05.214992    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:05.214995    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:05.214997    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:05 GMT
	I0926 18:16:05.214999    5496 round_trippers.go:580]     Audit-Id: 471fd1c1-7c8b-481e-b842-7bd63ef96a20
	I0926 18:16:05.215092    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:05.215402    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:05.215409    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:05.215415    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:05.215419    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:05.216818    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:05.216826    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:05.216833    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:05.216864    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:05.216871    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:05 GMT
	I0926 18:16:05.216882    5496 round_trippers.go:580]     Audit-Id: 69f287ee-6cd6-4907-9920-6771d90d68cf
	I0926 18:16:05.216886    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:05.216890    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:05.217004    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:05.713284    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:05.713313    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:05.713325    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:05.713330    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:05.716123    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:05.716138    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:05.716145    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:05.716150    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:05.716153    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:05 GMT
	I0926 18:16:05.716156    5496 round_trippers.go:580]     Audit-Id: c236a70d-dca2-43ed-88fc-5879c4da6276
	I0926 18:16:05.716159    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:05.716162    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:05.716608    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:05.716988    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:05.716998    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:05.717005    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:05.717009    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:05.718413    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:05.718421    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:05.718427    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:05.718432    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:05 GMT
	I0926 18:16:05.718436    5496 round_trippers.go:580]     Audit-Id: e2bebbe0-f138-4103-9234-96d5b4493142
	I0926 18:16:05.718440    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:05.718445    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:05.718448    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:05.718652    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:06.212124    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:06.212144    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:06.212152    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:06.212160    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:06.214150    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:06.214163    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:06.214171    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:06.214187    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:06.214190    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:06 GMT
	I0926 18:16:06.214193    5496 round_trippers.go:580]     Audit-Id: 4c81e4b1-7637-4522-88a5-35994488ee60
	I0926 18:16:06.214195    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:06.214198    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:06.214456    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:06.214738    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:06.214745    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:06.214751    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:06.214755    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:06.215721    5496 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0926 18:16:06.215729    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:06.215734    5496 round_trippers.go:580]     Audit-Id: 6793a338-0f0b-40ac-808c-7c730fdaa921
	I0926 18:16:06.215737    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:06.215740    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:06.215742    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:06.215745    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:06.215752    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:06 GMT
	I0926 18:16:06.215916    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:06.712574    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:06.712599    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:06.712611    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:06.712618    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:06.715372    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:06.715388    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:06.715394    5496 round_trippers.go:580]     Audit-Id: 964daf26-8394-4eab-82c8-79eaacdbb111
	I0926 18:16:06.715397    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:06.715402    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:06.715406    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:06.715410    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:06.715419    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:06 GMT
	I0926 18:16:06.715499    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:06.715868    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:06.715878    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:06.715885    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:06.715892    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:06.717615    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:06.717622    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:06.717628    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:06.717630    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:06.717633    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:06.717636    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:06 GMT
	I0926 18:16:06.717639    5496 round_trippers.go:580]     Audit-Id: ccb2dc73-d30a-4c73-87fa-54f1870594a0
	I0926 18:16:06.717642    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:06.717718    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:06.717892    5496 pod_ready.go:103] pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace has status "Ready":"False"
	I0926 18:16:07.212618    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:07.212639    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:07.212651    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:07.212657    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:07.215738    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:16:07.215752    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:07.215760    5496 round_trippers.go:580]     Audit-Id: 8caa92ad-c5fe-4849-92b3-734aa6eb01e5
	I0926 18:16:07.215764    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:07.215767    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:07.215771    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:07.215791    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:07.215794    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:07 GMT
	I0926 18:16:07.216121    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:07.216498    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:07.216509    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:07.216516    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:07.216527    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:07.217819    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:07.217827    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:07.217832    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:07.217835    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:07.217849    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:07.217857    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:07.217861    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:07 GMT
	I0926 18:16:07.217865    5496 round_trippers.go:580]     Audit-Id: d1de1a1f-5441-4b91-9eb9-edf61f425c09
	I0926 18:16:07.217973    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:07.712067    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:07.712083    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:07.712090    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:07.712094    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:07.714031    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:07.714042    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:07.714050    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:07.714056    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:07.714062    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:07 GMT
	I0926 18:16:07.714065    5496 round_trippers.go:580]     Audit-Id: 74fd7ec4-593c-47d4-add6-3b619a16e4ea
	I0926 18:16:07.714069    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:07.714073    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:07.714333    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:07.714638    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:07.714645    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:07.714650    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:07.714654    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:07.716665    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:07.716675    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:07.716681    5496 round_trippers.go:580]     Audit-Id: c0be86c7-e109-45fa-acbb-fc3af4ede8f7
	I0926 18:16:07.716685    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:07.716690    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:07.716693    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:07.716696    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:07.716699    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:07 GMT
	I0926 18:16:07.716889    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:08.212879    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:08.212895    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.212903    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.212915    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.215000    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:08.215010    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.215015    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.215019    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.215023    5496 round_trippers.go:580]     Audit-Id: 45b134b7-7b5e-4989-947a-6d2e367bc761
	I0926 18:16:08.215027    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.215029    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.215032    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.215271    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1208","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0926 18:16:08.215565    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:08.215572    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.215578    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.215581    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.216758    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:08.216765    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.216769    5496 round_trippers.go:580]     Audit-Id: 926a702c-fc90-4baf-93f6-eff541526f7c
	I0926 18:16:08.216772    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.216775    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.216777    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.216779    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.216785    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.216902    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:08.712112    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hxdhm
	I0926 18:16:08.712134    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.712143    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.712147    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.714246    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:08.714258    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.714266    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.714270    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.714272    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.714274    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.714305    5496 round_trippers.go:580]     Audit-Id: e431404e-082c-4bff-af26-2800dd810ef0
	I0926 18:16:08.714312    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.714686    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1374","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7040 chars]
	I0926 18:16:08.715136    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:08.715143    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.715163    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.715166    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.716625    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:08.716633    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.716637    5496 round_trippers.go:580]     Audit-Id: 82b0e256-e8f3-4136-b015-289d803762d8
	I0926 18:16:08.716640    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.716643    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.716646    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.716649    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.716652    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.716724    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:08.716964    5496 pod_ready.go:93] pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace has status "Ready":"True"
	I0926 18:16:08.716986    5496 pod_ready.go:82] duration metric: took 12.505026877s for pod "coredns-7c65d6cfc9-hxdhm" in "kube-system" namespace to be "Ready" ...
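
The pod_ready.go blocks above all follow one shape: GET the pod, GET its hosting node, and report success once the pod's Ready condition is True. A minimal client-go sketch of that loop, assuming a standard clientset (the helper name and the 2-second poll interval are illustrative, not minikube's actual code):

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the named pod reports condition Ready=True,
// mirroring the repeated GET loops in pod_ready.go above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as "not ready yet" and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
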
	I0926 18:16:08.716993    5496 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:08.717042    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-108000
	I0926 18:16:08.717048    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.717053    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.717057    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.718219    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:08.718226    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.718231    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.718234    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.718251    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.718259    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.718263    5496 round_trippers.go:580]     Audit-Id: 6d5f2ed6-1617-40cb-bf5c-402f0d5297ac
	I0926 18:16:08.718271    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.718394    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-108000","namespace":"kube-system","uid":"2a5e99f4-416d-4d75-acd2-33231f5f780d","resourceVersion":"1339","creationTimestamp":"2024-09-27T01:08:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"416a07783603924d51ef5ca1abc7c318","kubernetes.io/config.mirror":"416a07783603924d51ef5ca1abc7c318","kubernetes.io/config.seen":"2024-09-27T01:08:53.027445649Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6664 chars]
	I0926 18:16:08.718621    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:08.718632    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.718639    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.718641    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.719690    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:08.719696    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.719701    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.719705    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.719708    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.719711    5496 round_trippers.go:580]     Audit-Id: 43ab5724-617c-4514-a5bb-167d63692c64
	I0926 18:16:08.719713    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.719716    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.719869    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:08.720037    5496 pod_ready.go:93] pod "etcd-multinode-108000" in "kube-system" namespace has status "Ready":"True"
	I0926 18:16:08.720045    5496 pod_ready.go:82] duration metric: took 3.033233ms for pod "etcd-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:08.720056    5496 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:08.720092    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-108000
	I0926 18:16:08.720097    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.720102    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.720106    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.721268    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:08.721274    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.721279    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.721281    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.721285    5496 round_trippers.go:580]     Audit-Id: 3b3dc1f7-a39b-4e8b-a18f-b508b0ba1b76
	I0926 18:16:08.721288    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.721290    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.721292    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.721466    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-108000","namespace":"kube-system","uid":"b8011715-128c-4dfc-94b7-cc9c04907c8a","resourceVersion":"1324","creationTimestamp":"2024-09-27T01:08:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.14:8443","kubernetes.io/config.hash":"3b2e0fdf454135a81bc6cacb88271d66","kubernetes.io/config.mirror":"3b2e0fdf454135a81bc6cacb88271d66","kubernetes.io/config.seen":"2024-09-27T01:08:53.027447712Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7892 chars]
	I0926 18:16:08.721703    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:08.721709    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.721715    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.721718    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.722770    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:08.722783    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.722788    5496 round_trippers.go:580]     Audit-Id: 2ddd8768-a8e3-4ead-9322-d4bd19be6dac
	I0926 18:16:08.722792    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.722794    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.722797    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.722801    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.722804    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.723100    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:08.723275    5496 pod_ready.go:93] pod "kube-apiserver-multinode-108000" in "kube-system" namespace has status "Ready":"True"
	I0926 18:16:08.723282    5496 pod_ready.go:82] duration metric: took 3.221367ms for pod "kube-apiserver-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:08.723288    5496 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:08.723319    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-108000
	I0926 18:16:08.723324    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.723329    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.723332    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.724429    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:08.724436    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.724441    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.724456    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.724462    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.724470    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.724473    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.724475    5496 round_trippers.go:580]     Audit-Id: 935f71ba-741e-4b6c-baa8-8880af499c49
	I0926 18:16:08.724752    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-108000","namespace":"kube-system","uid":"42fac17d-5eda-41e8-8747-902b605e747f","resourceVersion":"1343","creationTimestamp":"2024-09-27T01:08:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fec5fbfbd6a0fb8784a74d22da6a6ca2","kubernetes.io/config.mirror":"fec5fbfbd6a0fb8784a74d22da6a6ca2","kubernetes.io/config.seen":"2024-09-27T01:08:53.027448437Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0926 18:16:08.724969    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:08.724975    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.724980    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.724985    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.726136    5496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0926 18:16:08.726143    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.726149    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.726155    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.726160    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.726171    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.726173    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.726176    5496 round_trippers.go:580]     Audit-Id: c1b31b81-6e52-41b9-82f9-a973a2ea460a
	I0926 18:16:08.726293    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:08.726466    5496 pod_ready.go:93] pod "kube-controller-manager-multinode-108000" in "kube-system" namespace has status "Ready":"True"
	I0926 18:16:08.726474    5496 pod_ready.go:82] duration metric: took 3.181952ms for pod "kube-controller-manager-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:08.726481    5496 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9kjdl" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:08.726520    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9kjdl
	I0926 18:16:08.726525    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.726530    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.726534    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.727483    5496 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0926 18:16:08.727490    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.727495    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.727508    5496 round_trippers.go:580]     Audit-Id: 8925fcb2-83f8-4fc0-b6c2-47cfb2296bdd
	I0926 18:16:08.727514    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.727517    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.727520    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.727522    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.727634    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9kjdl","generateName":"kube-proxy-","namespace":"kube-system","uid":"979606a2-6bc4-46c0-8333-000bc25722f3","resourceVersion":"1316","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec618504-e724-4298-bf49-56d9dcae8b25","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec618504-e724-4298-bf49-56d9dcae8b25\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6395 chars]
	I0926 18:16:08.727865    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:08.727872    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.727877    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.727880    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.728757    5496 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0926 18:16:08.728765    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.728772    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.728777    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.728791    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.728796    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:08 GMT
	I0926 18:16:08.728799    5496 round_trippers.go:580]     Audit-Id: 18a998c6-a62e-431e-a427-1d957ad8d6a5
	I0926 18:16:08.728801    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.728892    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:08.729055    5496 pod_ready.go:93] pod "kube-proxy-9kjdl" in "kube-system" namespace has status "Ready":"True"
	I0926 18:16:08.729063    5496 pod_ready.go:82] duration metric: took 2.576896ms for pod "kube-proxy-9kjdl" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:08.729068    5496 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ngs2x" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:08.913415    5496 request.go:632] Waited for 184.23754ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ngs2x
	I0926 18:16:08.913463    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ngs2x
	I0926 18:16:08.913471    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:08.913486    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:08.913496    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:08.916020    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:08.916036    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:08.916043    5496 round_trippers.go:580]     Audit-Id: 2a06992a-a8b1-4882-a62c-ae06d1490485
	I0926 18:16:08.916048    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:08.916051    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:08.916054    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:08.916076    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:08.916083    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:09 GMT
	I0926 18:16:08.916253    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ngs2x","generateName":"kube-proxy-","namespace":"kube-system","uid":"f95c0316-b4a8-4f0c-a90b-a88af50fbc68","resourceVersion":"1040","creationTimestamp":"2024-09-27T01:09:40Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec618504-e724-4298-bf49-56d9dcae8b25","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:09:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec618504-e724-4298-bf49-56d9dcae8b25\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0926 18:16:09.114275    5496 request.go:632] Waited for 197.563469ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-108000-m02
	I0926 18:16:09.114347    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000-m02
	I0926 18:16:09.114356    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:09.114369    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:09.114376    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:09.116858    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:09.116875    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:09.116884    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:09.116905    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:09.116911    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:09 GMT
	I0926 18:16:09.116915    5496 round_trippers.go:580]     Audit-Id: 61c040fa-5156-4d6f-ae55-9bf815c5c22a
	I0926 18:16:09.116918    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:09.116921    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:09.117118    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000-m02","uid":"653db940-78e0-431e-befd-25309d2a6cc8","resourceVersion":"1071","creationTimestamp":"2024-09-27T01:13:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_26T18_13_29_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:13:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3812 chars]
	I0926 18:16:09.117352    5496 pod_ready.go:93] pod "kube-proxy-ngs2x" in "kube-system" namespace has status "Ready":"True"
	I0926 18:16:09.117363    5496 pod_ready.go:82] duration metric: took 388.284535ms for pod "kube-proxy-ngs2x" in "kube-system" namespace to be "Ready" ...
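
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above and below come from client-go's local token-bucket rate limiter, which paces requests before they ever reach the server's API Priority and Fairness machinery. A sketch of where that limiter is configured, using client-go's historical defaults of QPS=5/Burst=10 as illustrative values:

package sketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newThrottledClient builds a clientset whose requests are paced by
// client-go's token-bucket limiter; when the bucket is empty, request.go
// logs the "Waited for ... due to client-side throttling" messages seen here.
func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 5    // steady-state requests per second (client-go's historical default)
	cfg.Burst = 10 // short bursts allowed above QPS
	return kubernetes.NewForConfig(cfg)
}
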
	I0926 18:16:09.117372    5496 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pwrqj" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:09.313915    5496 request.go:632] Waited for 196.494719ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pwrqj
	I0926 18:16:09.314006    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pwrqj
	I0926 18:16:09.314017    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:09.314031    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:09.314039    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:09.317214    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:16:09.317231    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:09.317238    5496 round_trippers.go:580]     Audit-Id: 1320ad3b-29df-4788-b7ef-2e12a77ead86
	I0926 18:16:09.317243    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:09.317246    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:09.317249    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:09.317253    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:09.317257    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:09 GMT
	I0926 18:16:09.317416    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pwrqj","generateName":"kube-proxy-","namespace":"kube-system","uid":"dfc98f0e-705d-41fd-a871-9d4f8455b11d","resourceVersion":"1158","creationTimestamp":"2024-09-27T01:10:31Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec618504-e724-4298-bf49-56d9dcae8b25","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:10:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec618504-e724-4298-bf49-56d9dcae8b25\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0926 18:16:09.513510    5496 request.go:632] Waited for 195.70342ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-108000-m03
	I0926 18:16:09.513623    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000-m03
	I0926 18:16:09.513636    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:09.513648    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:09.513655    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:09.516445    5496 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0926 18:16:09.516461    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:09.516469    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:09.516473    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:09.516486    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:09.516490    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:09.516495    5496 round_trippers.go:580]     Content-Length: 210
	I0926 18:16:09.516499    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:09 GMT
	I0926 18:16:09.516503    5496 round_trippers.go:580]     Audit-Id: b84c6d4d-7ff7-40c3-a888-51c82f59b474
	I0926 18:16:09.516522    5496 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-108000-m03\" not found","reason":"NotFound","details":{"name":"multinode-108000-m03","kind":"nodes"},"code":404}
	I0926 18:16:09.516586    5496 pod_ready.go:98] node "multinode-108000-m03" hosting pod "kube-proxy-pwrqj" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-108000-m03": nodes "multinode-108000-m03" not found
	I0926 18:16:09.516599    5496 pod_ready.go:82] duration metric: took 399.215483ms for pod "kube-proxy-pwrqj" in "kube-system" namespace to be "Ready" ...
	E0926 18:16:09.516607    5496 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-108000-m03" hosting pod "kube-proxy-pwrqj" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-108000-m03": nodes "multinode-108000-m03" not found
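
The 404 above is expected control flow rather than a failure: pod "kube-proxy-pwrqj" still exists, but its hosting node "multinode-108000-m03" has been deleted, so the Ready wait is skipped instead of failing the whole check. A sketch of making that distinction with apimachinery's error helpers (the helper name is illustrative):

package sketch

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeGone reports skip=true when the pod's node no longer exists,
// mirroring the "(skipping!)" branch in pod_ready.go above.
func nodeGone(ctx context.Context, cs kubernetes.Interface, nodeName string) (skip bool, err error) {
	_, err = cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// Node was deleted (e.g. a removed worker): skip the wait, don't fail it.
		return true, nil
	}
	if err != nil {
		return false, fmt.Errorf("getting node %q: %w", nodeName, err)
	}
	return false, nil
}
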
	I0926 18:16:09.516614    5496 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:09.713207    5496 request.go:632] Waited for 196.494898ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-108000
	I0926 18:16:09.713288    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-108000
	I0926 18:16:09.713301    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:09.713317    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:09.713326    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:09.716159    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:09.716177    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:09.716184    5496 round_trippers.go:580]     Audit-Id: 3e9f3964-2f01-4f4c-866a-5e4f7f2fe5d2
	I0926 18:16:09.716190    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:09.716193    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:09.716197    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:09.716220    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:09.716228    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:09 GMT
	I0926 18:16:09.716357    5496 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-108000","namespace":"kube-system","uid":"e5b482e0-154d-4620-8f24-1ebf181b9c1b","resourceVersion":"1335","creationTimestamp":"2024-09-27T01:08:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"40cf241c42ae492b94bc92cec52f27f4","kubernetes.io/config.mirror":"40cf241c42ae492b94bc92cec52f27f4","kubernetes.io/config.seen":"2024-09-27T01:08:53.027449029Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0926 18:16:09.913068    5496 request.go:632] Waited for 196.314136ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:09.913119    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-108000
	I0926 18:16:09.913128    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:09.913170    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:09.913181    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:09.915980    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:09.915994    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:09.916002    5496 round_trippers.go:580]     Audit-Id: 9a9c499c-1a25-438c-9be3-9aa603be7aa5
	I0926 18:16:09.916006    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:09.916035    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:09.916048    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:09.916051    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:09.916056    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:10 GMT
	I0926 18:16:09.916155    5496 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-27T01:08:50Z","fieldsType":"FieldsV1","f [truncated 5173 chars]
	I0926 18:16:09.916423    5496 pod_ready.go:93] pod "kube-scheduler-multinode-108000" in "kube-system" namespace has status "Ready":"True"
	I0926 18:16:09.916434    5496 pod_ready.go:82] duration metric: took 399.807036ms for pod "kube-scheduler-multinode-108000" in "kube-system" namespace to be "Ready" ...
	I0926 18:16:09.916443    5496 pod_ready.go:39] duration metric: took 13.709820236s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0926 18:16:09.916458    5496 api_server.go:52] waiting for apiserver process to appear ...
	I0926 18:16:09.916533    5496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:16:09.929940    5496 command_runner.go:130] > 1714
	I0926 18:16:09.929978    5496 api_server.go:72] duration metric: took 30.506092435s to wait for apiserver process to appear ...
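
Before probing HTTP, api_server.go confirms a kube-apiserver process exists inside the VM via pgrep; the "> 1714" above is the matched pid, and a non-zero pgrep exit means no match. A sketch of parsing that check's output, assuming a generic SSH runner (runSSH stands in for minikube's ssh_runner):

package sketch

import (
	"strconv"
	"strings"
)

// apiserverPID parses the pgrep output seen above ("1714") into a pid.
// runSSH is an assumed helper that executes a command inside the VM and
// returns its stdout.
func apiserverPID(runSSH func(cmd string) (string, error)) (int, error) {
	out, err := runSSH(`sudo pgrep -xnf kube-apiserver.*minikube.*`)
	if err != nil {
		return 0, err // pgrep exits non-zero when no process matches
	}
	return strconv.Atoi(strings.TrimSpace(out))
}
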
	I0926 18:16:09.929985    5496 api_server.go:88] waiting for apiserver healthz status ...
	I0926 18:16:09.929997    5496 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0926 18:16:09.933731    5496 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
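
The healthz probe is a plain HTTPS GET whose success case is status 200 with the literal body "ok", as logged above. A minimal sketch; the InsecureSkipVerify is an assumption made for brevity, whereas a real client would trust the cluster CA from the kubeconfig:

package sketch

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz mirrors the GET https://<ip>:8443/healthz probe above:
// healthy means HTTP 200 with body "ok".
func checkHealthz(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip verification for brevity; minikube actually
		// dials with the cluster CA from its kubeconfig.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
	}
	return nil
}
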
	I0926 18:16:09.933761    5496 round_trippers.go:463] GET https://192.169.0.14:8443/version
	I0926 18:16:09.933767    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:09.933772    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:09.933777    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:09.934445    5496 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0926 18:16:09.934453    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:09.934458    5496 round_trippers.go:580]     Content-Length: 263
	I0926 18:16:09.934461    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:10 GMT
	I0926 18:16:09.934465    5496 round_trippers.go:580]     Audit-Id: 5c340580-5c39-47b5-a356-133075a6df60
	I0926 18:16:09.934468    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:09.934470    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:09.934474    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:09.934476    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:09.934520    5496 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0926 18:16:09.934547    5496 api_server.go:141] control plane version: v1.31.1
	I0926 18:16:09.934558    5496 api_server.go:131] duration metric: took 4.566184ms to wait for apiserver health ...
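
The /version body above is the standard version.Info document; the "control plane version" line is just its gitVersion field. A sketch of the decode, with the struct trimmed to the fields the log uses:

package sketch

import "encoding/json"

// versionInfo holds the subset of the /version response logged above.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"` // e.g. "v1.31.1"
}

// controlPlaneVersion extracts gitVersion from a /version response body.
func controlPlaneVersion(body []byte) (string, error) {
	var v versionInfo
	if err := json.Unmarshal(body, &v); err != nil {
		return "", err
	}
	return v.GitVersion, nil
}
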
	I0926 18:16:09.934564    5496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 18:16:10.113446    5496 request.go:632] Waited for 178.818116ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0926 18:16:10.113564    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0926 18:16:10.113575    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:10.113586    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:10.113597    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:10.117452    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:16:10.117472    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:10.117483    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:10.117501    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:10.117508    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:10.117516    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:10 GMT
	I0926 18:16:10.117523    5496 round_trippers.go:580]     Audit-Id: a1510877-fe88-4671-851a-7550c754986d
	I0926 18:16:10.117529    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:10.118591    5496 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1378"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1374","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89336 chars]
	I0926 18:16:10.120628    5496 system_pods.go:59] 12 kube-system pods found
	I0926 18:16:10.120638    5496 system_pods.go:61] "coredns-7c65d6cfc9-hxdhm" [ff9bbfa0-9278-44d7-abc5-7a38ed77ce23] Running
	I0926 18:16:10.120642    5496 system_pods.go:61] "etcd-multinode-108000" [2a5e99f4-416d-4d75-acd2-33231f5f780d] Running
	I0926 18:16:10.120645    5496 system_pods.go:61] "kindnet-ktwmw" [5065643a-e9ee-44a6-a05d-b9154074dd84] Running
	I0926 18:16:10.120651    5496 system_pods.go:61] "kindnet-qlv2x" [08c7f9d2-c689-40b5-95fc-a48157150778] Running
	I0926 18:16:10.120655    5496 system_pods.go:61] "kindnet-wbk29" [a9ff7c3f-b5e1-40e5-ab9d-a38e2696988f] Running
	I0926 18:16:10.120658    5496 system_pods.go:61] "kube-apiserver-multinode-108000" [b8011715-128c-4dfc-94b7-cc9c04907c8a] Running
	I0926 18:16:10.120662    5496 system_pods.go:61] "kube-controller-manager-multinode-108000" [42fac17d-5eda-41e8-8747-902b605e747f] Running
	I0926 18:16:10.120664    5496 system_pods.go:61] "kube-proxy-9kjdl" [979606a2-6bc4-46c0-8333-000bc25722f3] Running
	I0926 18:16:10.120667    5496 system_pods.go:61] "kube-proxy-ngs2x" [f95c0316-b4a8-4f0c-a90b-a88af50fbc68] Running
	I0926 18:16:10.120669    5496 system_pods.go:61] "kube-proxy-pwrqj" [dfc98f0e-705d-41fd-a871-9d4f8455b11d] Running
	I0926 18:16:10.120672    5496 system_pods.go:61] "kube-scheduler-multinode-108000" [e5b482e0-154d-4620-8f24-1ebf181b9c1b] Running
	I0926 18:16:10.120676    5496 system_pods.go:61] "storage-provisioner" [e67377e5-f7c5-4625-9739-3703de1f4739] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 18:16:10.120681    5496 system_pods.go:74] duration metric: took 186.111068ms to wait for pod list to return data ...
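
Note the storage-provisioner entry above: its phase is Running while the Ready and ContainersReady conditions are still false with reason ContainersNotReady, hence the compound "Running / Ready:... / ContainersReady:..." rendering. A sketch of producing that kind of summary from a pod object (an inference about the format, not minikube's exact code):

package sketch

import (
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// podSummary renders "Running" for healthy pods and appends any false
// readiness conditions the way system_pods.go prints storage-provisioner above.
func podSummary(pod *corev1.Pod) string {
	parts := []string{string(pod.Status.Phase)}
	for _, c := range pod.Status.Conditions {
		if (c.Type == corev1.PodReady || c.Type == corev1.ContainersReady) &&
			c.Status != corev1.ConditionTrue {
			parts = append(parts, fmt.Sprintf("%s:%s", c.Type, c.Reason))
		}
	}
	return strings.Join(parts, " / ")
}
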
	I0926 18:16:10.120686    5496 default_sa.go:34] waiting for default service account to be created ...
	I0926 18:16:10.312196    5496 request.go:632] Waited for 191.450108ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/default/serviceaccounts
	I0926 18:16:10.312274    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/default/serviceaccounts
	I0926 18:16:10.312282    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:10.312289    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:10.312293    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:10.314931    5496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0926 18:16:10.314940    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:10.314945    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:10.314950    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:10.314952    5496 round_trippers.go:580]     Content-Length: 262
	I0926 18:16:10.314955    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:10 GMT
	I0926 18:16:10.314958    5496 round_trippers.go:580]     Audit-Id: 0579afb4-d182-49f3-824c-63d92338701e
	I0926 18:16:10.314970    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:10.314973    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:10.314983    5496 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1378"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2124ff28-6fda-431f-9782-123cd032ca69","resourceVersion":"363","creationTimestamp":"2024-09-27T01:08:58Z"}}]}
	I0926 18:16:10.315096    5496 default_sa.go:45] found service account: "default"
	I0926 18:16:10.315105    5496 default_sa.go:55] duration metric: took 194.411667ms for default service account to be created ...
	I0926 18:16:10.315129    5496 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 18:16:10.512752    5496 request.go:632] Waited for 197.563272ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0926 18:16:10.512887    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0926 18:16:10.512898    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:10.512909    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:10.512915    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:10.516171    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:16:10.516188    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:10.516196    5496 round_trippers.go:580]     Audit-Id: ea9e0f75-fb7e-41d9-98f6-04bd29f02b8d
	I0926 18:16:10.516203    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:10.516208    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:10.516213    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:10.516218    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:10.516223    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:10 GMT
	I0926 18:16:10.517179    5496 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1378"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hxdhm","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"ff9bbfa0-9278-44d7-abc5-7a38ed77ce23","resourceVersion":"1374","creationTimestamp":"2024-09-27T01:08:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"87cfe77a-2b7d-40a2-828f-dd9ab90ce247","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-27T01:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87cfe77a-2b7d-40a2-828f-dd9ab90ce247\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89336 chars]
	I0926 18:16:10.519162    5496 system_pods.go:86] 12 kube-system pods found
	I0926 18:16:10.519173    5496 system_pods.go:89] "coredns-7c65d6cfc9-hxdhm" [ff9bbfa0-9278-44d7-abc5-7a38ed77ce23] Running
	I0926 18:16:10.519177    5496 system_pods.go:89] "etcd-multinode-108000" [2a5e99f4-416d-4d75-acd2-33231f5f780d] Running
	I0926 18:16:10.519185    5496 system_pods.go:89] "kindnet-ktwmw" [5065643a-e9ee-44a6-a05d-b9154074dd84] Running
	I0926 18:16:10.519189    5496 system_pods.go:89] "kindnet-qlv2x" [08c7f9d2-c689-40b5-95fc-a48157150778] Running
	I0926 18:16:10.519192    5496 system_pods.go:89] "kindnet-wbk29" [a9ff7c3f-b5e1-40e5-ab9d-a38e2696988f] Running
	I0926 18:16:10.519195    5496 system_pods.go:89] "kube-apiserver-multinode-108000" [b8011715-128c-4dfc-94b7-cc9c04907c8a] Running
	I0926 18:16:10.519198    5496 system_pods.go:89] "kube-controller-manager-multinode-108000" [42fac17d-5eda-41e8-8747-902b605e747f] Running
	I0926 18:16:10.519201    5496 system_pods.go:89] "kube-proxy-9kjdl" [979606a2-6bc4-46c0-8333-000bc25722f3] Running
	I0926 18:16:10.519204    5496 system_pods.go:89] "kube-proxy-ngs2x" [f95c0316-b4a8-4f0c-a90b-a88af50fbc68] Running
	I0926 18:16:10.519208    5496 system_pods.go:89] "kube-proxy-pwrqj" [dfc98f0e-705d-41fd-a871-9d4f8455b11d] Running
	I0926 18:16:10.519212    5496 system_pods.go:89] "kube-scheduler-multinode-108000" [e5b482e0-154d-4620-8f24-1ebf181b9c1b] Running
	I0926 18:16:10.519216    5496 system_pods.go:89] "storage-provisioner" [e67377e5-f7c5-4625-9739-3703de1f4739] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 18:16:10.519222    5496 system_pods.go:126] duration metric: took 204.085161ms to wait for k8s-apps to be running ...
	I0926 18:16:10.519230    5496 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 18:16:10.519290    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 18:16:10.531238    5496 system_svc.go:56] duration metric: took 12.005812ms WaitForService to wait for kubelet
	I0926 18:16:10.531252    5496 kubeadm.go:582] duration metric: took 31.107358282s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
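
The kubelet check above shells out to systemd instead of the API server; systemctl's --quiet flag suppresses output and reports state via the exit code alone. A sketch, again assuming a generic SSH runner:

package sketch

// kubeletRunning mirrors the system_svc.go step above: a zero exit from
// "systemctl is-active --quiet" means the unit is active. runSSH is an
// assumed stand-in for minikube's ssh_runner.Run.
func kubeletRunning(runSSH func(cmd string) (string, error)) bool {
	_, err := runSSH("sudo systemctl is-active --quiet service kubelet")
	return err == nil // non-zero exit (inactive/failed) surfaces as an error
}
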
	I0926 18:16:10.531263    5496 node_conditions.go:102] verifying NodePressure condition ...
	I0926 18:16:10.712625    5496 request.go:632] Waited for 181.265839ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes
	I0926 18:16:10.712727    5496 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes
	I0926 18:16:10.712739    5496 round_trippers.go:469] Request Headers:
	I0926 18:16:10.712750    5496 round_trippers.go:473]     Accept: application/json, */*
	I0926 18:16:10.712759    5496 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0926 18:16:10.716020    5496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0926 18:16:10.716036    5496 round_trippers.go:577] Response Headers:
	I0926 18:16:10.716043    5496 round_trippers.go:580]     Date: Fri, 27 Sep 2024 01:16:10 GMT
	I0926 18:16:10.716046    5496 round_trippers.go:580]     Audit-Id: ede53581-e306-437d-9089-b442b44b2546
	I0926 18:16:10.716050    5496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0926 18:16:10.716053    5496 round_trippers.go:580]     Content-Type: application/json
	I0926 18:16:10.716056    5496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0d9ffd9f-5b11-4d33-9fce-f46729a44f69
	I0926 18:16:10.716060    5496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 98724cca-9dbb-4df0-82e8-c1537b78b261
	I0926 18:16:10.716193    5496 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1378"},"items":[{"metadata":{"name":"multinode-108000","uid":"b7e84598-ef29-43f0-821a-fc5857e700b0","resourceVersion":"1351","creationTimestamp":"2024-09-27T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-108000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eee16a295c071ed5a0e96cbbc00bcd13b2654625","minikube.k8s.io/name":"multinode-108000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_26T18_08_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10031 chars]
	I0926 18:16:10.716590    5496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 18:16:10.716601    5496 node_conditions.go:123] node cpu capacity is 2
	I0926 18:16:10.716610    5496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 18:16:10.716624    5496 node_conditions.go:123] node cpu capacity is 2
	I0926 18:16:10.716630    5496 node_conditions.go:105] duration metric: took 185.361562ms to run NodePressure ...
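
node_conditions.go reads each node's capacity out of the NodeList fetched above; two nodes are in the list, hence the repeated ephemeral-storage/cpu pairs. A sketch of pulling those quantities (resource.Quantity is apimachinery's unit-aware number type):

package sketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// printCapacity mirrors the "node storage ephemeral capacity ... node cpu
// capacity" lines above, one pair per node in the list.
func printCapacity(nodes []corev1.Node) {
	for _, n := range nodes {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
		fmt.Printf("node cpu capacity is %s\n", cpu.String())
	}
}
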
	I0926 18:16:10.716640    5496 start.go:241] waiting for startup goroutines ...
	I0926 18:16:10.716648    5496 start.go:246] waiting for cluster config update ...
	I0926 18:16:10.716656    5496 start.go:255] writing updated cluster config ...
	I0926 18:16:10.740326    5496 out.go:201] 
	I0926 18:16:10.762941    5496 config.go:182] Loaded profile config "multinode-108000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:16:10.763067    5496 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/config.json ...
	I0926 18:16:10.785642    5496 out.go:177] * Starting "multinode-108000-m02" worker node in "multinode-108000" cluster
	I0926 18:16:10.828316    5496 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:16:10.828348    5496 cache.go:56] Caching tarball of preloaded images
	I0926 18:16:10.828562    5496 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0926 18:16:10.828582    5496 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:16:10.828729    5496 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/config.json ...
	I0926 18:16:10.829651    5496 start.go:360] acquireMachinesLock for multinode-108000-m02: {Name:mk62b0ec9b6170d1971239573141a625c4a2621e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:16:10.829736    5496 start.go:364] duration metric: took 66.242µs to acquireMachinesLock for "multinode-108000-m02"
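	Note: acquireMachinesLock above carries Delay:500ms and Timeout:13m0s, i.e. a poll-until-deadline lock taken around machine create/fix. A sketch of that retry shape; tryAcquire is a hypothetical stand-in for whatever lock backend minikube actually uses:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// tryAcquire is hypothetical; it exists only to show the retry shape.
	func tryAcquire(name string) bool { return true }

	// acquireWithTimeout polls every delay until the lock is held or the
	// timeout elapses, matching the Delay:500ms Timeout:13m0s in the log.
	func acquireWithTimeout(name string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if tryAcquire(name) {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out acquiring " + name)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		start := time.Now()
		if err := acquireWithTimeout("multinode-108000-m02", 500*time.Millisecond, 13*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("duration metric: took %s to acquireMachinesLock\n", time.Since(start))
	}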
	I0926 18:16:10.829755    5496 start.go:96] Skipping create...Using existing machine configuration
	I0926 18:16:10.829761    5496 fix.go:54] fixHost starting: m02
	I0926 18:16:10.830111    5496 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:16:10.830139    5496 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:16:10.839542    5496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53395
	I0926 18:16:10.840007    5496 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:16:10.840355    5496 main.go:141] libmachine: Using API Version  1
	I0926 18:16:10.840367    5496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:16:10.840583    5496 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:16:10.840708    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .DriverName
	I0926 18:16:10.840795    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetState
	I0926 18:16:10.840893    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:16:10.840974    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | hyperkit pid from json: 5421
	I0926 18:16:10.841906    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | hyperkit pid 5421 missing from process table
	I0926 18:16:10.841940    5496 fix.go:112] recreateIfNeeded on multinode-108000-m02: state=Stopped err=<nil>
	I0926 18:16:10.841948    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .DriverName
	W0926 18:16:10.842035    5496 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 18:16:10.884340    5496 out.go:177] * Restarting existing hyperkit VM for "multinode-108000-m02" ...
	I0926 18:16:10.905397    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .Start
	I0926 18:16:10.905658    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:16:10.905745    5496 main.go:141] libmachine: (multinode-108000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/hyperkit.pid
	I0926 18:16:10.907373    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | hyperkit pid 5421 missing from process table
	I0926 18:16:10.907399    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | pid 5421 is in state "Stopped"
	I0926 18:16:10.907425    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/hyperkit.pid...
	I0926 18:16:10.907804    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | Using UUID e259e2c5-bca0-4baf-a344-b5e82f91b394
	I0926 18:16:10.936208    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | Generated MAC ee:f:11:b8:c4:d4
	I0926 18:16:10.936235    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-108000
	I0926 18:16:10.936402    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e259e2c5-bca0-4baf-a344-b5e82f91b394", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aac00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:16:10.936438    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e259e2c5-bca0-4baf-a344-b5e82f91b394", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aac00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0926 18:16:10.936545    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e259e2c5-bca0-4baf-a344-b5e82f91b394", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/multinode-108000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-108000"}
	I0926 18:16:10.936616    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e259e2c5-bca0-4baf-a344-b5e82f91b394 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/multinode-108000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/tty,log=/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/bzimage,/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-108000"
	I0926 18:16:10.936644    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0926 18:16:10.938132    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 DEBUG: hyperkit: Pid is 5532
	I0926 18:16:10.938620    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | Attempt 0
	I0926 18:16:10.938633    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:16:10.938691    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | hyperkit pid from json: 5532
	I0926 18:16:10.940910    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | Searching for ee:f:11:b8:c4:d4 in /var/db/dhcpd_leases ...
	I0926 18:16:10.940974    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0926 18:16:10.941019    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6e:13:d0:11:59:38 ID:1,6e:13:d0:11:59:38 Lease:0x66f758a8}
	I0926 18:16:10.941052    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:c9:e:44:fd:98 ID:1,56:c9:e:44:fd:98 Lease:0x66f6070c}
	I0926 18:16:10.941090    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ee:f:11:b8:c4:d4 ID:1,ee:f:11:b8:c4:d4 Lease:0x66f75815}
	I0926 18:16:10.941107    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetConfigRaw
	I0926 18:16:10.941107    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | Found match: ee:f:11:b8:c4:d4
	I0926 18:16:10.941164    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | IP: 192.169.0.15
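	Note: the IP lookup above works by scanning /var/db/dhcpd_leases (the macOS vmnet DHCP lease file) for the entry whose hw_address matches the VM's generated MAC. A minimal sketch, assuming the lease file's usual layout where ip_address= precedes hw_address= within each entry:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findIPByMAC returns the ip_address of the lease whose hw_address matches
	// mac. hw_address lines look like "hw_address=1,ee:f:11:b8:c4:d4", so the
	// MAC is compared as a suffix; error handling is kept minimal.
	func findIPByMAC(path, mac string) (string, bool) {
		f, err := os.Open(path)
		if err != nil {
			return "", false
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				if strings.HasSuffix(line, ","+mac) || strings.HasSuffix(line, "="+mac) {
					return ip, true
				}
			}
		}
		return "", false
	}

	func main() {
		if ip, ok := findIPByMAC("/var/db/dhcpd_leases", "ee:f:11:b8:c4:d4"); ok {
			fmt.Println("IP:", ip)
		}
	}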
	I0926 18:16:10.941862    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetIP
	I0926 18:16:10.942059    5496 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/multinode-108000/config.json ...
	I0926 18:16:10.942737    5496 machine.go:93] provisionDockerMachine start ...
	I0926 18:16:10.942749    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .DriverName
	I0926 18:16:10.942906    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:10.943008    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:10.943101    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:10.943238    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:10.943327    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:10.943460    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:16:10.943664    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0926 18:16:10.943672    5496 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 18:16:10.946770    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0926 18:16:10.955292    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0926 18:16:10.956603    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:16:10.956628    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:16:10.956642    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:16:10.956655    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:16:11.342405    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0926 18:16:11.342417    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0926 18:16:11.457102    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0926 18:16:11.457120    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0926 18:16:11.457194    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0926 18:16:11.457223    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0926 18:16:11.457964    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0926 18:16:11.457973    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0926 18:16:17.102016    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:17 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0926 18:16:17.102068    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:17 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0926 18:16:17.102076    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:17 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0926 18:16:17.126475    5496 main.go:141] libmachine: (multinode-108000-m02) DBG | 2024/09/26 18:16:17 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0926 18:16:21.114075    5496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.15:22: connect: connection refused
	I0926 18:16:24.168910    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 18:16:24.168927    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetMachineName
	I0926 18:16:24.169054    5496 buildroot.go:166] provisioning hostname "multinode-108000-m02"
	I0926 18:16:24.169065    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetMachineName
	I0926 18:16:24.169159    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:24.169251    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:24.169357    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.169436    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.169520    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:24.169682    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:16:24.169825    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0926 18:16:24.169834    5496 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-108000-m02 && echo "multinode-108000-m02" | sudo tee /etc/hostname
	I0926 18:16:24.230948    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-108000-m02
	
	I0926 18:16:24.230975    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:24.231113    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:24.231215    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.231304    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.231397    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:24.231531    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:16:24.231674    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0926 18:16:24.231686    5496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-108000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-108000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-108000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 18:16:24.289165    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
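	Note: the /etc/hosts script above is rendered per machine name: it maps 127.0.1.1 to the new hostname, editing an existing 127.0.1.1 line in place or appending one if absent. A sketch of how such a script can be templated in Go; the helper name is illustrative, not minikube's:

	package main

	import "fmt"

	// hostsUpdateScript renders the /etc/hosts fix-up the provisioner runs:
	// only touch the file when the hostname is missing, preferring to rewrite
	// an existing 127.0.1.1 entry over appending a new line.
	func hostsUpdateScript(hostname string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
	}

	func main() {
		fmt.Println(hostsUpdateScript("multinode-108000-m02"))
	}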
	I0926 18:16:24.289186    5496 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1128/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1128/.minikube}
	I0926 18:16:24.289196    5496 buildroot.go:174] setting up certificates
	I0926 18:16:24.289203    5496 provision.go:84] configureAuth start
	I0926 18:16:24.289211    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetMachineName
	I0926 18:16:24.289341    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetIP
	I0926 18:16:24.289455    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:24.289535    5496 provision.go:143] copyHostCerts
	I0926 18:16:24.289565    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 18:16:24.289626    5496 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem, removing ...
	I0926 18:16:24.289631    5496 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem
	I0926 18:16:24.289779    5496 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/ca.pem (1082 bytes)
	I0926 18:16:24.289981    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 18:16:24.290020    5496 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem, removing ...
	I0926 18:16:24.290026    5496 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem
	I0926 18:16:24.290112    5496 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/cert.pem (1123 bytes)
	I0926 18:16:24.290254    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 18:16:24.290293    5496 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem, removing ...
	I0926 18:16:24.290299    5496 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem
	I0926 18:16:24.290380    5496 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1128/.minikube/key.pem (1675 bytes)
	I0926 18:16:24.290524    5496 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca-key.pem org=jenkins.multinode-108000-m02 san=[127.0.0.1 192.169.0.15 localhost minikube multinode-108000-m02]
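	Note: the server cert generated above carries the SANs listed in san=[...] so the Docker TLS endpoint is valid for the VM IP, localhost, and the machine names. A compact crypto/x509 sketch of producing such a cert; it self-signs for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair named in the log:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// SANs mirror the log's san=[127.0.0.1 192.169.0.15 localhost minikube multinode-108000-m02].
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-108000-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "multinode-108000-m02"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.15")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			panic(err)
		}
	}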
	I0926 18:16:24.366522    5496 provision.go:177] copyRemoteCerts
	I0926 18:16:24.366572    5496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 18:16:24.366585    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:24.366716    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:24.366822    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.366914    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:24.367000    5496 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/id_rsa Username:docker}
	I0926 18:16:24.398912    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 18:16:24.398982    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 18:16:24.417796    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 18:16:24.417875    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 18:16:24.436574    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 18:16:24.436637    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0926 18:16:24.455630    5496 provision.go:87] duration metric: took 166.418573ms to configureAuth
	I0926 18:16:24.455642    5496 buildroot.go:189] setting minikube options for container-runtime
	I0926 18:16:24.455800    5496 config.go:182] Loaded profile config "multinode-108000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:16:24.455814    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .DriverName
	I0926 18:16:24.455958    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:24.456056    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:24.456142    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.456215    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.456298    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:24.456433    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:16:24.456556    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0926 18:16:24.456563    5496 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 18:16:24.508014    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 18:16:24.508027    5496 buildroot.go:70] root file system type: tmpfs
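	Note: the tmpfs answer above decides how the docker unit file is installed. The probe is just `df --output=fstype /` with the header stripped by `tail -n 1`; the same probe in Go (assumes GNU df, as present on the buildroot guest):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// rootFSType runs the probe from the log: df prints a header plus one
	// value for /, and tail keeps only the value.
	func rootFSType() (string, error) {
		out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		fstype, err := rootFSType()
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		fmt.Println("root file system type:", fstype)
	}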
	I0926 18:16:24.508111    5496 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 18:16:24.508127    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:24.508265    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:24.508363    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.508438    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.508531    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:24.508667    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:16:24.508806    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0926 18:16:24.508850    5496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.14"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 18:16:24.570548    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.14
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 18:16:24.570570    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:24.570708    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:24.570797    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.570893    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:24.570983    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:24.571119    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:16:24.571257    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0926 18:16:24.571269    5496 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 18:16:26.151246    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 18:16:26.151261    5496 machine.go:96] duration metric: took 15.208338586s to provisionDockerMachine
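	Note: the `sudo diff -u ... || { sudo mv ...; systemctl daemon-reload/enable/restart; }` sequence above installs the rendered unit and restarts docker only when it differs from the copy on disk; here the diff failed because no unit existed yet, so the install branch ran and the symlink was created. The same compare-then-install shape in Go, using the paths from the log:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// installIfChanged swaps in the new unit and restarts docker only on a
	// diff; a missing installed file (first boot) also counts as changed.
	func installIfChanged(newPath, livePath string) error {
		newUnit, err := os.ReadFile(newPath)
		if err != nil {
			return err
		}
		liveUnit, err := os.ReadFile(livePath)
		if err == nil && bytes.Equal(newUnit, liveUnit) {
			return nil // identical: skip the needless docker restart
		}
		if err := os.Rename(newPath, livePath); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := installIfChanged("/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"); err != nil {
			fmt.Println(err)
		}
	}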
	I0926 18:16:26.151275    5496 start.go:293] postStartSetup for "multinode-108000-m02" (driver="hyperkit")
	I0926 18:16:26.151282    5496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 18:16:26.151292    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .DriverName
	I0926 18:16:26.151502    5496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 18:16:26.151516    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:26.151624    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:26.151720    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:26.151803    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:26.151887    5496 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/id_rsa Username:docker}
	I0926 18:16:26.190229    5496 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 18:16:26.194730    5496 command_runner.go:130] > NAME=Buildroot
	I0926 18:16:26.194741    5496 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0926 18:16:26.194745    5496 command_runner.go:130] > ID=buildroot
	I0926 18:16:26.194748    5496 command_runner.go:130] > VERSION_ID=2023.02.9
	I0926 18:16:26.194770    5496 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0926 18:16:26.194949    5496 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 18:16:26.194961    5496 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/addons for local assets ...
	I0926 18:16:26.195080    5496 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1128/.minikube/files for local assets ...
	I0926 18:16:26.195261    5496 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> 16792.pem in /etc/ssl/certs
	I0926 18:16:26.195267    5496 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem -> /etc/ssl/certs/16792.pem
	I0926 18:16:26.195477    5496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 18:16:26.205182    5496 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/ssl/certs/16792.pem --> /etc/ssl/certs/16792.pem (1708 bytes)
	I0926 18:16:26.236499    5496 start.go:296] duration metric: took 85.214566ms for postStartSetup
	I0926 18:16:26.236519    5496 fix.go:56] duration metric: took 15.406578543s for fixHost
	I0926 18:16:26.236535    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:26.236660    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:26.236739    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:26.236846    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:26.236930    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:26.237066    5496 main.go:141] libmachine: Using SSH client type: native
	I0926 18:16:26.237215    5496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc5b7d00] 0xc5ba9e0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0926 18:16:26.237222    5496 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 18:16:26.289010    5496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727399786.357757601
	
	I0926 18:16:26.289021    5496 fix.go:216] guest clock: 1727399786.357757601
	I0926 18:16:26.289026    5496 fix.go:229] Guest: 2024-09-26 18:16:26.357757601 -0700 PDT Remote: 2024-09-26 18:16:26.236525 -0700 PDT m=+75.467120085 (delta=121.232601ms)
	I0926 18:16:26.289036    5496 fix.go:200] guest clock delta is within tolerance: 121.232601ms
	I0926 18:16:26.289040    5496 start.go:83] releasing machines lock for "multinode-108000-m02", held for 15.459115782s
	I0926 18:16:26.289057    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .DriverName
	I0926 18:16:26.289184    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetIP
	I0926 18:16:26.318797    5496 out.go:177] * Found network options:
	I0926 18:16:26.338485    5496 out.go:177]   - NO_PROXY=192.169.0.14
	W0926 18:16:26.375452    5496 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 18:16:26.375479    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .DriverName
	I0926 18:16:26.375952    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .DriverName
	I0926 18:16:26.376076    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .DriverName
	I0926 18:16:26.376176    5496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0926 18:16:26.376179    5496 proxy.go:119] fail to check proxy env: Error ip not in block
	I0926 18:16:26.376196    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:26.376244    5496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 18:16:26.376254    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:16:26.376311    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:26.376385    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:16:26.376424    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:26.376521    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:26.376537    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:16:26.376633    5496 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/id_rsa Username:docker}
	I0926 18:16:26.376658    5496 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:16:26.376741    5496 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/id_rsa Username:docker}
	I0926 18:16:26.405394    5496 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0926 18:16:26.405444    5496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 18:16:26.405511    5496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 18:16:26.454318    5496 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0926 18:16:26.455166    5496 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0926 18:16:26.455197    5496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 18:16:26.455211    5496 start.go:495] detecting cgroup driver to use...
	I0926 18:16:26.455332    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 18:16:26.470885    5496 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0926 18:16:26.471259    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 18:16:26.479939    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 18:16:26.488337    5496 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 18:16:26.488410    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 18:16:26.496681    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 18:16:26.505298    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 18:16:26.513668    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 18:16:26.522274    5496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 18:16:26.531204    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 18:16:26.539719    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 18:16:26.547937    5496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
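	Note: the run of `sed -i -r` commands above rewrites /etc/containerd/config.toml in place: sandbox image, restrict_oom_score_adj, SystemdCgroup, runtime names, and conf_dir. One of those substitutions expressed with Go's regexp package, preserving the leading-whitespace capture exactly as the sed expression does:

	package main

	import (
		"fmt"
		"regexp"
	)

	// setSystemdCgroup applies the same rewrite as
	// `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`:
	// every SystemdCgroup assignment is forced to false (the chosen cgroup
	// driver here is cgroupfs), with indentation kept intact.
	func setSystemdCgroup(config []byte) []byte {
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		return re.ReplaceAll(config, []byte("${1}SystemdCgroup = false"))
	}

	func main() {
		in := []byte("  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n")
		fmt.Print(string(setSystemdCgroup(in)))
	}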
	I0926 18:16:26.556289    5496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 18:16:26.563677    5496 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 18:16:26.563695    5496 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 18:16:26.563744    5496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 18:16:26.573401    5496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 18:16:26.585751    5496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:16:26.682695    5496 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 18:16:26.701736    5496 start.go:495] detecting cgroup driver to use...
	I0926 18:16:26.701814    5496 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 18:16:26.718260    5496 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0926 18:16:26.718770    5496 command_runner.go:130] > [Unit]
	I0926 18:16:26.718778    5496 command_runner.go:130] > Description=Docker Application Container Engine
	I0926 18:16:26.718796    5496 command_runner.go:130] > Documentation=https://docs.docker.com
	I0926 18:16:26.718802    5496 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0926 18:16:26.718809    5496 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0926 18:16:26.718818    5496 command_runner.go:130] > StartLimitBurst=3
	I0926 18:16:26.718822    5496 command_runner.go:130] > StartLimitIntervalSec=60
	I0926 18:16:26.718826    5496 command_runner.go:130] > [Service]
	I0926 18:16:26.718829    5496 command_runner.go:130] > Type=notify
	I0926 18:16:26.718833    5496 command_runner.go:130] > Restart=on-failure
	I0926 18:16:26.718836    5496 command_runner.go:130] > Environment=NO_PROXY=192.169.0.14
	I0926 18:16:26.718847    5496 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0926 18:16:26.718853    5496 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0926 18:16:26.718859    5496 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0926 18:16:26.718865    5496 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0926 18:16:26.718870    5496 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0926 18:16:26.718875    5496 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0926 18:16:26.718881    5496 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0926 18:16:26.718889    5496 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0926 18:16:26.718895    5496 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0926 18:16:26.718899    5496 command_runner.go:130] > ExecStart=
	I0926 18:16:26.718912    5496 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0926 18:16:26.718929    5496 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0926 18:16:26.718944    5496 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0926 18:16:26.718951    5496 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0926 18:16:26.718954    5496 command_runner.go:130] > LimitNOFILE=infinity
	I0926 18:16:26.718958    5496 command_runner.go:130] > LimitNPROC=infinity
	I0926 18:16:26.718962    5496 command_runner.go:130] > LimitCORE=infinity
	I0926 18:16:26.718967    5496 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0926 18:16:26.718971    5496 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0926 18:16:26.718976    5496 command_runner.go:130] > TasksMax=infinity
	I0926 18:16:26.718979    5496 command_runner.go:130] > TimeoutStartSec=0
	I0926 18:16:26.718985    5496 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0926 18:16:26.718990    5496 command_runner.go:130] > Delegate=yes
	I0926 18:16:26.718995    5496 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0926 18:16:26.719005    5496 command_runner.go:130] > KillMode=process
	I0926 18:16:26.719008    5496 command_runner.go:130] > [Install]
	I0926 18:16:26.719013    5496 command_runner.go:130] > WantedBy=multi-user.target
	I0926 18:16:26.719096    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 18:16:26.731985    5496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 18:16:26.749935    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 18:16:26.761214    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 18:16:26.771758    5496 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 18:16:26.794322    5496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 18:16:26.804929    5496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 18:16:26.819754    5496 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0926 18:16:26.820015    5496 ssh_runner.go:195] Run: which cri-dockerd
	I0926 18:16:26.822756    5496 command_runner.go:130] > /usr/bin/cri-dockerd
	I0926 18:16:26.822954    5496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 18:16:26.830085    5496 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 18:16:26.843567    5496 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 18:16:26.944121    5496 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 18:16:27.051128    5496 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 18:16:27.051158    5496 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
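	Note: the 130-byte /etc/docker/daemon.json pushed above is what pins dockerd to the cgroupfs driver chosen earlier. The exact bytes are not shown in the log, so beyond the exec-opts cgroup-driver setting the shape below is an assumption:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// daemonConfig carries the one setting this step is about: forcing
	// dockerd's cgroup driver to cgroupfs via exec-opts. Real daemon.json
	// files often carry more fields; this minimal shape is assumed.
	type daemonConfig struct {
		ExecOpts []string `json:"exec-opts"`
	}

	func main() {
		cfg := daemonConfig{ExecOpts: []string{"native.cgroupdriver=cgroupfs"}}
		out, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out)) // the rendered file is copied to /etc/docker/daemon.json over SSH
	}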
	I0926 18:16:27.065233    5496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:16:27.171138    5496 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 18:17:28.193406    5496 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0926 18:17:28.193420    5496 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0926 18:17:28.193431    5496 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.021705353s)
	I0926 18:17:28.193497    5496 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0926 18:17:28.203177    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0926 18:17:28.203190    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:24.939272378Z" level=info msg="Starting up"
	I0926 18:17:28.203199    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:24.939744281Z" level=info msg="containerd not running, starting managed containerd"
	I0926 18:17:28.203212    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:24.940372696Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=496
	I0926 18:17:28.203223    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.955635497Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	I0926 18:17:28.203233    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975220104Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0926 18:17:28.203245    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975290387Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0926 18:17:28.203256    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975364574Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0926 18:17:28.203265    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975401354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0926 18:17:28.203276    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975543498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0926 18:17:28.203286    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975598213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0926 18:17:28.203305    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975731849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0926 18:17:28.203314    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975772849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0926 18:17:28.203324    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975804657Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0926 18:17:28.203334    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975834070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0926 18:17:28.203344    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975998842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0926 18:17:28.203353    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.976165653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0926 18:17:28.203371    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.977740780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0926 18:17:28.203387    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.977823231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0926 18:17:28.203424    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.977979310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0926 18:17:28.203438    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.978024001Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0926 18:17:28.203448    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.978133741Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0926 18:17:28.203456    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.978192781Z" level=info msg="metadata content store policy set" policy=shared
	I0926 18:17:28.203464    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979398865Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0926 18:17:28.203473    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979452106Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0926 18:17:28.203481    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979487510Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0926 18:17:28.203491    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979520613Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0926 18:17:28.203499    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979552321Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0926 18:17:28.203508    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979616545Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0926 18:17:28.203517    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979877476Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0926 18:17:28.203526    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979969253Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0926 18:17:28.203535    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980006327Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0926 18:17:28.203544    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980040846Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0926 18:17:28.203554    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980075255Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0926 18:17:28.203563    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980114319Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0926 18:17:28.203573    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980148760Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0926 18:17:28.203582    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980189045Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0926 18:17:28.203591    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980223417Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0926 18:17:28.203600    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980253164Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0926 18:17:28.203689    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980282269Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0926 18:17:28.203700    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980310608Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0926 18:17:28.203709    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980348289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203718    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980386978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203727    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980418532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203736    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980449540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203745    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980484042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203754    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980514235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203763    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980543443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203773    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980573293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203785    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980609651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203794    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980646773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203802    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980677054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203811    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980706205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203819    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980735214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203829    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980766272Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0926 18:17:28.203837    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980806833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203846    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980838839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.203855    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980868321Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0926 18:17:28.203865    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980965209Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0926 18:17:28.203876    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981007924Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0926 18:17:28.203885    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981037680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0926 18:17:28.204036    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981066963Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0926 18:17:28.204049    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981094655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0926 18:17:28.204060    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981124463Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0926 18:17:28.204068    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981155319Z" level=info msg="NRI interface is disabled by configuration."
	I0926 18:17:28.204076    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981325910Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0926 18:17:28.204085    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981412041Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0926 18:17:28.204093    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981496206Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0926 18:17:28.204103    5496 command_runner.go:130] > Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981538298Z" level=info msg="containerd successfully booted in 0.026518s"
	I0926 18:17:28.204111    5496 command_runner.go:130] > Sep 27 01:16:25 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:25.961351885Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0926 18:17:28.204119    5496 command_runner.go:130] > Sep 27 01:16:25 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:25.971609471Z" level=info msg="Loading containers: start."
	I0926 18:17:28.204137    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.079462380Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0926 18:17:28.204148    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.142922131Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0926 18:17:28.204161    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.187253380Z" level=warning msg="error locating sandbox id e0ffe3e7a49ab09d3d6f73ed7b97f2135141a0ed4c3e63d2a302fd9ffd181dbb: sandbox e0ffe3e7a49ab09d3d6f73ed7b97f2135141a0ed4c3e63d2a302fd9ffd181dbb not found"
	I0926 18:17:28.204171    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.187440681Z" level=info msg="Loading containers: done."
	I0926 18:17:28.204180    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.195076424Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0926 18:17:28.204187    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.195150891Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0926 18:17:28.204197    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.195197197Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	I0926 18:17:28.204204    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.195352314Z" level=info msg="Daemon has completed initialization"
	I0926 18:17:28.204213    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.217613628Z" level=info msg="API listen on /var/run/docker.sock"
	I0926 18:17:28.204220    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.217699368Z" level=info msg="API listen on [::]:2376"
	I0926 18:17:28.204226    5496 command_runner.go:130] > Sep 27 01:16:26 multinode-108000-m02 systemd[1]: Started Docker Application Container Engine.
	I0926 18:17:28.204236    5496 command_runner.go:130] > Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.252125643Z" level=info msg="Processing signal 'terminated'"
	I0926 18:17:28.204267    5496 command_runner.go:130] > Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.252968662Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0926 18:17:28.204277    5496 command_runner.go:130] > Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.253242428Z" level=info msg="Daemon shutdown complete"
	I0926 18:17:28.204285    5496 command_runner.go:130] > Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.253285728Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0926 18:17:28.204296    5496 command_runner.go:130] > Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.253375422Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0926 18:17:28.204303    5496 command_runner.go:130] > Sep 27 01:16:27 multinode-108000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0926 18:17:28.204308    5496 command_runner.go:130] > Sep 27 01:16:28 multinode-108000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0926 18:17:28.204314    5496 command_runner.go:130] > Sep 27 01:16:28 multinode-108000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0926 18:17:28.204320    5496 command_runner.go:130] > Sep 27 01:16:28 multinode-108000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0926 18:17:28.204326    5496 command_runner.go:130] > Sep 27 01:16:28 multinode-108000-m02 dockerd[907]: time="2024-09-27T01:16:28.287366515Z" level=info msg="Starting up"
	I0926 18:17:28.204336    5496 command_runner.go:130] > Sep 27 01:17:28 multinode-108000-m02 dockerd[907]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0926 18:17:28.204343    5496 command_runner.go:130] > Sep 27 01:17:28 multinode-108000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0926 18:17:28.204349    5496 command_runner.go:130] > Sep 27 01:17:28 multinode-108000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0926 18:17:28.204355    5496 command_runner.go:130] > Sep 27 01:17:28 multinode-108000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0926 18:17:28.231231    5496 out.go:201] 
	W0926 18:17:28.253111    5496 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 27 01:16:24 multinode-108000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 27 01:16:24 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:24.939272378Z" level=info msg="Starting up"
	Sep 27 01:16:24 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:24.939744281Z" level=info msg="containerd not running, starting managed containerd"
	Sep 27 01:16:24 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:24.940372696Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=496
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.955635497Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975220104Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975290387Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975364574Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975401354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975543498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975598213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975731849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975772849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975804657Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975834070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.975998842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.976165653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.977740780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.977823231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.977979310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.978024001Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.978133741Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.978192781Z" level=info msg="metadata content store policy set" policy=shared
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979398865Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979452106Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979487510Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979520613Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979552321Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979616545Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979877476Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.979969253Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980006327Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980040846Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980075255Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980114319Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980148760Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980189045Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980223417Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980253164Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980282269Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980310608Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980348289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980386978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980418532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980449540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980484042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980514235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980543443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980573293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980609651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980646773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980677054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980706205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980735214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980766272Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980806833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980838839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980868321Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.980965209Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981007924Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981037680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981066963Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981094655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981124463Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981155319Z" level=info msg="NRI interface is disabled by configuration."
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981325910Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981412041Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981496206Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 27 01:16:24 multinode-108000-m02 dockerd[496]: time="2024-09-27T01:16:24.981538298Z" level=info msg="containerd successfully booted in 0.026518s"
	Sep 27 01:16:25 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:25.961351885Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 27 01:16:25 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:25.971609471Z" level=info msg="Loading containers: start."
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.079462380Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.142922131Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.187253380Z" level=warning msg="error locating sandbox id e0ffe3e7a49ab09d3d6f73ed7b97f2135141a0ed4c3e63d2a302fd9ffd181dbb: sandbox e0ffe3e7a49ab09d3d6f73ed7b97f2135141a0ed4c3e63d2a302fd9ffd181dbb not found"
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.187440681Z" level=info msg="Loading containers: done."
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.195076424Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.195150891Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.195197197Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.195352314Z" level=info msg="Daemon has completed initialization"
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.217613628Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 27 01:16:26 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:26.217699368Z" level=info msg="API listen on [::]:2376"
	Sep 27 01:16:26 multinode-108000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.252125643Z" level=info msg="Processing signal 'terminated'"
	Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.252968662Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.253242428Z" level=info msg="Daemon shutdown complete"
	Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.253285728Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 27 01:16:27 multinode-108000-m02 dockerd[489]: time="2024-09-27T01:16:27.253375422Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 27 01:16:27 multinode-108000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 27 01:16:28 multinode-108000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 27 01:16:28 multinode-108000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 27 01:16:28 multinode-108000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 27 01:16:28 multinode-108000-m02 dockerd[907]: time="2024-09-27T01:16:28.287366515Z" level=info msg="Starting up"
	Sep 27 01:17:28 multinode-108000-m02 dockerd[907]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 27 01:17:28 multinode-108000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 27 01:17:28 multinode-108000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 27 01:17:28 multinode-108000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0926 18:17:28.253218    5496 out.go:270] * 
	W0926 18:17:28.254521    5496 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:17:28.316658    5496 out.go:201] 
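
	The journal above pins the failure on the worker: the restarted dockerd (pid 907) spends a full minute trying to dial /run/containerd/containerd.sock, hits "context deadline exceeded", and docker.service fails, which minikube reports as RUNTIME_ENABLE. A minimal follow-up sketch, assuming the guest is still reachable and the standard minikube layout (profile and node names are taken from this log; these commands are illustrative and were not part of the test run):

	minikube ssh -p multinode-108000 -n m02             # node name "m02" assumed from the hostname in the journal
	ls -l /run/containerd/containerd.sock               # the socket dockerd timed out dialing
	sudo systemctl status docker                        # confirm the failed unit state
	sudo journalctl -u docker --no-pager | tail -n 50   # the same journal tail shown above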
	
	
	==> Docker <==
	Sep 27 01:16:06 multinode-108000 dockerd[912]: time="2024-09-27T01:16:06.608534537Z" level=info msg="shim disconnected" id=ac547d6aae7299eed1885a92494f0a70555f42e09d2e9ca7a677d150b3b743e9 namespace=moby
	Sep 27 01:16:06 multinode-108000 dockerd[912]: time="2024-09-27T01:16:06.608712265Z" level=warning msg="cleaning up after shim disconnected" id=ac547d6aae7299eed1885a92494f0a70555f42e09d2e9ca7a677d150b3b743e9 namespace=moby
	Sep 27 01:16:06 multinode-108000 dockerd[912]: time="2024-09-27T01:16:06.608766304Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 01:16:07 multinode-108000 dockerd[912]: time="2024-09-27T01:16:07.874730888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 01:16:07 multinode-108000 dockerd[912]: time="2024-09-27T01:16:07.875329684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 01:16:07 multinode-108000 dockerd[912]: time="2024-09-27T01:16:07.875455442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 01:16:07 multinode-108000 dockerd[912]: time="2024-09-27T01:16:07.875994256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 01:16:07 multinode-108000 dockerd[912]: time="2024-09-27T01:16:07.922145200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 01:16:07 multinode-108000 dockerd[912]: time="2024-09-27T01:16:07.922318504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 01:16:07 multinode-108000 dockerd[912]: time="2024-09-27T01:16:07.922365398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 01:16:07 multinode-108000 dockerd[912]: time="2024-09-27T01:16:07.922580499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 01:16:08 multinode-108000 cri-dockerd[1160]: time="2024-09-27T01:16:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/195c2af29cf8ecb44fca8c97c843b62e72ed566ab494280e22ab2181e7393e76/resolv.conf as [nameserver 192.169.0.1]"
	Sep 27 01:16:08 multinode-108000 dockerd[912]: time="2024-09-27T01:16:08.090708401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 01:16:08 multinode-108000 dockerd[912]: time="2024-09-27T01:16:08.090882251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 01:16:08 multinode-108000 dockerd[912]: time="2024-09-27T01:16:08.090910493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 01:16:08 multinode-108000 dockerd[912]: time="2024-09-27T01:16:08.091066428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 01:16:08 multinode-108000 cri-dockerd[1160]: time="2024-09-27T01:16:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8c57284b0e427a47697bfd73df9c94fab1009df34e710793bf7a7b9814db2de4/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 27 01:16:08 multinode-108000 dockerd[912]: time="2024-09-27T01:16:08.182489093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 01:16:08 multinode-108000 dockerd[912]: time="2024-09-27T01:16:08.182569966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 01:16:08 multinode-108000 dockerd[912]: time="2024-09-27T01:16:08.182592821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 01:16:08 multinode-108000 dockerd[912]: time="2024-09-27T01:16:08.182841946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 01:16:22 multinode-108000 dockerd[912]: time="2024-09-27T01:16:22.946101815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 01:16:22 multinode-108000 dockerd[912]: time="2024-09-27T01:16:22.946207835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 01:16:22 multinode-108000 dockerd[912]: time="2024-09-27T01:16:22.946220437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 01:16:22 multinode-108000 dockerd[912]: time="2024-09-27T01:16:22.946717772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	584d51dd604ad       6e38f40d628db       About a minute ago   Running             storage-provisioner       4                   7b8ee55c02277       storage-provisioner
	144857a4b842a       8c811b4aec35f       About a minute ago   Running             busybox                   2                   8c57284b0e427       busybox-7dff88458-p6dk8
	d4e43d846e1fc       c69fa2e9cbf5f       About a minute ago   Running             coredns                   2                   195c2af29cf8e       coredns-7c65d6cfc9-hxdhm
	0c7449c406d4e       12968670680f4       About a minute ago   Running             kindnet-cni               2                   872a30c4047f1       kindnet-wbk29
	e1f599a2004cc       60c005f310ff3       About a minute ago   Running             kube-proxy                2                   cb11825b0f8b8       kube-proxy-9kjdl
	ac547d6aae729       6e38f40d628db       About a minute ago   Exited              storage-provisioner       3                   7b8ee55c02277       storage-provisioner
	a62b405990619       2e96e5913fc06       About a minute ago   Running             etcd                      2                   4aea5e2be2e77       etcd-multinode-108000
	7a1bae355a2ec       9aa1fad941575       About a minute ago   Running             kube-scheduler            2                   8139a5f3d2856       kube-scheduler-multinode-108000
	88a3b3c33bff5       175ffd71cce3d       About a minute ago   Running             kube-controller-manager   2                   8e66e76528782       kube-controller-manager-multinode-108000
	e2ee40adba9a5       6bab7719df100       About a minute ago   Running             kube-apiserver            2                   e2ee9a4b586a0       kube-apiserver-multinode-108000
	fcdfeaa22bc53       8c811b4aec35f       4 minutes ago        Exited              busybox                   1                   39b3092e3871c       busybox-7dff88458-p6dk8
	264e74b184f31       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   ae6756186a894       coredns-7c65d6cfc9-hxdhm
	aa5128e84e3c4       12968670680f4       5 minutes ago        Exited              kindnet-cni               1                   d28db07575ac2       kindnet-wbk29
	67dac98df54b4       60c005f310ff3       5 minutes ago        Exited              kube-proxy                1                   6c14e4e508173       kube-proxy-9kjdl
	0b00cd940822b       9aa1fad941575       5 minutes ago        Exited              kube-scheduler            1                   0d2737b4b4465       kube-scheduler-multinode-108000
	96b13fc13d926       2e96e5913fc06       5 minutes ago        Exited              etcd                      1                   e4d5b4323b94b       etcd-multinode-108000
	e8c9a9508a996       175ffd71cce3d       5 minutes ago        Exited              kube-controller-manager   1                   0e2ed0aa05665       kube-controller-manager-multinode-108000
	e8ecb49c95edb       6bab7719df100       5 minutes ago        Exited              kube-apiserver            1                   700ba38f29cdc       kube-apiserver-multinode-108000
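
	Reading the ATTEMPT column: the control-plane pods are Running on fresh attempts (2-4) with their previous instances listed below as Exited, so the primary node recovered from the restart while m02's runtime did not. A sketch for reproducing this listing inside the guest, given the docker runtime shown above (illustrative, not from the original run):

	minikube ssh -p multinode-108000
	docker ps -a --filter name=k8s_        # kubelet-managed containers carry the k8s_ name prefix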
	
	
	==> coredns [264e74b184f3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41803 - 27710 "HINFO IN 7603396136542669407.4139474930871402941. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.013043782s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
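
	The SIGTERM and lameduck lines are the normal graceful shutdown of the previous coredns instance (264e74b184f3, shown as Exited in the status table above); the replacement instance follows. A sketch for pulling the terminated container's logs directly, assuming the pod name from this report:

	kubectl -n kube-system logs coredns-7c65d6cfc9-hxdhm --previous   # --previous selects the exited container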
	
	
	==> coredns [d4e43d846e1f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40441 - 3338 "HINFO IN 3918999964421277028.3413838272920254096. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011374914s
	
	
	==> describe nodes <==
	Name:               multinode-108000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-108000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=multinode-108000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_26T18_08_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 01:08:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-108000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 01:17:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 01:15:55 +0000   Fri, 27 Sep 2024 01:08:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 01:15:55 +0000   Fri, 27 Sep 2024 01:08:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 01:15:55 +0000   Fri, 27 Sep 2024 01:08:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 01:15:55 +0000   Fri, 27 Sep 2024 01:15:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.14
	  Hostname:    multinode-108000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd77448f16db4dc98e06ceee53138b99
	  System UUID:                1fff4af0-0000-0000-b682-f00d5d335588
	  Boot ID:                    6996b4b5-d98e-4f81-8a66-5964f726ba1f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-p6dk8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 coredns-7c65d6cfc9-hxdhm                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m31s
	  kube-system                 etcd-multinode-108000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m36s
	  kube-system                 kindnet-wbk29                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m31s
	  kube-system                 kube-apiserver-multinode-108000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 kube-controller-manager-multinode-108000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 kube-proxy-9kjdl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-scheduler-multinode-108000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m29s                  kube-proxy       
	  Normal  Starting                 113s                   kube-proxy       
	  Normal  Starting                 5m3s                   kube-proxy       
	  Normal  Starting                 8m42s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m41s (x8 over 8m42s)  kubelet          Node multinode-108000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m41s (x8 over 8m42s)  kubelet          Node multinode-108000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m41s (x7 over 8m42s)  kubelet          Node multinode-108000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     8m36s                  kubelet          Node multinode-108000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m36s                  kubelet          Node multinode-108000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m36s                  kubelet          Node multinode-108000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m36s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m32s                  node-controller  Node multinode-108000 event: Registered Node multinode-108000 in Controller
	  Normal  NodeReady                8m12s                  kubelet          Node multinode-108000 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m8s (x8 over 5m8s)    kubelet          Node multinode-108000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m8s (x8 over 5m8s)    kubelet          Node multinode-108000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m8s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     5m8s (x7 over 5m8s)    kubelet          Node multinode-108000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m1s                   node-controller  Node multinode-108000 event: Registered Node multinode-108000 in Controller
	  Normal  Starting                 119s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  119s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  118s (x8 over 119s)    kubelet          Node multinode-108000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 119s)    kubelet          Node multinode-108000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x7 over 119s)    kubelet          Node multinode-108000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           111s                   node-controller  Node multinode-108000 event: Registered Node multinode-108000 in Controller
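
	The event history records three kubelet start cycles on the primary (roughly 8m42s, 5m8s, and 119s old), matching the repeated restarts in this test, and the latest RegisteredNode event confirms the control plane rejoined. A sketch for filtering these events from the host, assuming kubectl points at this cluster (illustrative only):

	kubectl get events --field-selector involvedObject.kind=Node,involvedObject.name=multinode-108000 --sort-by=.lastTimestamp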
	
	
	Name:               multinode-108000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-108000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=multinode-108000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_26T18_13_29_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 01:13:29 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-108000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 01:14:51 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 27 Sep 2024 01:13:45 +0000   Fri, 27 Sep 2024 01:16:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 27 Sep 2024 01:13:45 +0000   Fri, 27 Sep 2024 01:16:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 27 Sep 2024 01:13:45 +0000   Fri, 27 Sep 2024 01:16:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 27 Sep 2024 01:13:45 +0000   Fri, 27 Sep 2024 01:16:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.15
	  Hostname:    multinode-108000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 c8e0f21e83604ad9adde5f99b991627d
	  System UUID:                e2594baf-0000-0000-a344-b5e82f91b394
	  Boot ID:                    9c432b35-ffd0-4b93-b66b-753937215860
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mnmmg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kindnet-ktwmw              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m49s
	  kube-system                 kube-proxy-ngs2x           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m41s                  kube-proxy       
	  Normal  Starting                 3m57s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    7m49s (x2 over 7m49s)  kubelet          Node multinode-108000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m49s (x2 over 7m49s)  kubelet          Node multinode-108000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m49s (x2 over 7m49s)  kubelet          Node multinode-108000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                7m26s                  kubelet          Node multinode-108000-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  4m (x2 over 4m)        kubelet          Node multinode-108000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m (x2 over 4m)        kubelet          Node multinode-108000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m (x2 over 4m)        kubelet          Node multinode-108000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m                     kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m44s                  kubelet          Node multinode-108000-m02 status is now: NodeReady
	  Normal  RegisteredNode           111s                   node-controller  Node multinode-108000-m02 event: Registered Node multinode-108000-m02 in Controller
	  Normal  NodeNotReady             71s                    node-controller  Node multinode-108000-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +5.703170] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006921] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.715980] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.225244] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.520761] systemd-fstab-generator[461]: Ignoring "noauto" option for root device
	[  +0.107511] systemd-fstab-generator[473]: Ignoring "noauto" option for root device
	[  +1.885570] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +0.269601] systemd-fstab-generator[871]: Ignoring "noauto" option for root device
	[  +0.099226] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.060372] kauditd_printk_skb: 123 callbacks suppressed
	[  +0.063315] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +2.487073] systemd-fstab-generator[1113]: Ignoring "noauto" option for root device
	[  +0.101424] systemd-fstab-generator[1125]: Ignoring "noauto" option for root device
	[  +0.114646] systemd-fstab-generator[1137]: Ignoring "noauto" option for root device
	[  +0.121354] systemd-fstab-generator[1152]: Ignoring "noauto" option for root device
	[  +0.399164] systemd-fstab-generator[1274]: Ignoring "noauto" option for root device
	[  +1.856038] systemd-fstab-generator[1406]: Ignoring "noauto" option for root device
	[  +0.049222] kauditd_printk_skb: 180 callbacks suppressed
	[  +5.511430] kauditd_printk_skb: 62 callbacks suppressed
	[  +3.420339] systemd-fstab-generator[2245]: Ignoring "noauto" option for root device
	[Sep27 01:16] kauditd_printk_skb: 72 callbacks suppressed
	[ +16.390700] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [96b13fc13d92] <==
	{"level":"info","ts":"2024-09-27T01:12:23.749959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-27T01:12:23.749981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 received MsgPreVoteResp from 3220d9553daad291 at term 2"}
	{"level":"info","ts":"2024-09-27T01:12:23.749995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 became candidate at term 3"}
	{"level":"info","ts":"2024-09-27T01:12:23.750003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 received MsgVoteResp from 3220d9553daad291 at term 3"}
	{"level":"info","ts":"2024-09-27T01:12:23.750044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 became leader at term 3"}
	{"level":"info","ts":"2024-09-27T01:12:23.750055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3220d9553daad291 elected leader 3220d9553daad291 at term 3"}
	{"level":"info","ts":"2024-09-27T01:12:23.751849Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3220d9553daad291","local-member-attributes":"{Name:multinode-108000 ClientURLs:[https://192.169.0.14:2379]}","request-path":"/0/members/3220d9553daad291/attributes","cluster-id":"9b2185e42760b005","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T01:12:23.751926Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:12:23.752683Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T01:12:23.752875Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T01:12:23.751973Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:12:23.754073Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:12:23.754313Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:12:23.754910Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.14:2379"}
	{"level":"info","ts":"2024-09-27T01:12:23.755336Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T01:15:02.761046Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-27T01:15:02.761117Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-108000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.14:2380"],"advertise-client-urls":["https://192.169.0.14:2379"]}
	{"level":"warn","ts":"2024-09-27T01:15:02.761186Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T01:15:02.761292Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T01:15:02.783353Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.14:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T01:15:02.783382Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.14:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-27T01:15:02.785079Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3220d9553daad291","current-leader-member-id":"3220d9553daad291"}
	{"level":"info","ts":"2024-09-27T01:15:02.787672Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.14:2380"}
	{"level":"info","ts":"2024-09-27T01:15:02.787754Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.14:2380"}
	{"level":"info","ts":"2024-09-27T01:15:02.787768Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-108000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.14:2380"],"advertise-client-urls":["https://192.169.0.14:2379"]}
	
	
	==> etcd [a62b40599061] <==
	{"level":"info","ts":"2024-09-27T01:15:32.643236Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.169.0.14:2380"}
	{"level":"info","ts":"2024-09-27T01:15:32.643649Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-27T01:15:32.644430Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T01:15:32.645029Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T01:15:32.646519Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T01:15:32.646946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 switched to configuration voters=(3612125861281190545)"}
	{"level":"info","ts":"2024-09-27T01:15:32.647076Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9b2185e42760b005","local-member-id":"3220d9553daad291","added-peer-id":"3220d9553daad291","added-peer-peer-urls":["https://192.169.0.14:2380"]}
	{"level":"info","ts":"2024-09-27T01:15:32.647878Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9b2185e42760b005","local-member-id":"3220d9553daad291","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:15:32.647957Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:15:34.434194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-27T01:15:34.434240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-27T01:15:34.434435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 received MsgPreVoteResp from 3220d9553daad291 at term 3"}
	{"level":"info","ts":"2024-09-27T01:15:34.434496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 became candidate at term 4"}
	{"level":"info","ts":"2024-09-27T01:15:34.434512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 received MsgVoteResp from 3220d9553daad291 at term 4"}
	{"level":"info","ts":"2024-09-27T01:15:34.434524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 became leader at term 4"}
	{"level":"info","ts":"2024-09-27T01:15:34.434534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3220d9553daad291 elected leader 3220d9553daad291 at term 4"}
	{"level":"info","ts":"2024-09-27T01:15:34.436636Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3220d9553daad291","local-member-attributes":"{Name:multinode-108000 ClientURLs:[https://192.169.0.14:2379]}","request-path":"/0/members/3220d9553daad291/attributes","cluster-id":"9b2185e42760b005","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T01:15:34.436763Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:15:34.436789Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:15:34.436929Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T01:15:34.437682Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T01:15:34.438250Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:15:34.438961Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T01:15:34.441810Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:15:34.445329Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.14:2379"}
	
	
	==> kernel <==
	 01:17:30 up 2 min,  0 users,  load average: 0.16, 0.10, 0.04
	Linux multinode-108000 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0c7449c406d4] <==
	I0927 01:16:27.356288       1 main.go:322] Node multinode-108000-m02 has CIDR [10.244.1.0/24] 
	I0927 01:16:37.356307       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0927 01:16:37.356496       1 main.go:299] handling current node
	I0927 01:16:37.356540       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0927 01:16:37.356568       1 main.go:322] Node multinode-108000-m02 has CIDR [10.244.1.0/24] 
	I0927 01:16:47.361293       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0927 01:16:47.361368       1 main.go:299] handling current node
	I0927 01:16:47.361386       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0927 01:16:47.361395       1 main.go:322] Node multinode-108000-m02 has CIDR [10.244.1.0/24] 
	I0927 01:16:57.362330       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0927 01:16:57.362622       1 main.go:299] handling current node
	I0927 01:16:57.362868       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0927 01:16:57.363065       1 main.go:322] Node multinode-108000-m02 has CIDR [10.244.1.0/24] 
	I0927 01:17:07.356178       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0927 01:17:07.356397       1 main.go:299] handling current node
	I0927 01:17:07.356437       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0927 01:17:07.356578       1 main.go:322] Node multinode-108000-m02 has CIDR [10.244.1.0/24] 
	I0927 01:17:17.362779       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0927 01:17:17.362813       1 main.go:299] handling current node
	I0927 01:17:17.362828       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0927 01:17:17.362835       1 main.go:322] Node multinode-108000-m02 has CIDR [10.244.1.0/24] 
	I0927 01:17:27.356667       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0927 01:17:27.356738       1 main.go:299] handling current node
	I0927 01:17:27.356757       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0927 01:17:27.356766       1 main.go:322] Node multinode-108000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [aa5128e84e3c] <==
	I0927 01:14:17.139285       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0927 01:14:17.139334       1 main.go:322] Node multinode-108000-m03 has CIDR [10.244.4.0/24] 
	I0927 01:14:27.135912       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0927 01:14:27.135963       1 main.go:299] handling current node
	I0927 01:14:27.135979       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0927 01:14:27.135986       1 main.go:322] Node multinode-108000-m02 has CIDR [10.244.1.0/24] 
	I0927 01:14:27.136067       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0927 01:14:27.136153       1 main.go:322] Node multinode-108000-m03 has CIDR [10.244.4.0/24] 
	I0927 01:14:37.136056       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0927 01:14:37.136342       1 main.go:322] Node multinode-108000-m02 has CIDR [10.244.1.0/24] 
	I0927 01:14:37.136568       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0927 01:14:37.136664       1 main.go:322] Node multinode-108000-m03 has CIDR [10.244.2.0/24] 
	I0927 01:14:37.136860       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.16 Flags: [] Table: 0} 
	I0927 01:14:37.137027       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0927 01:14:37.137113       1 main.go:299] handling current node
	I0927 01:14:47.140657       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0927 01:14:47.140695       1 main.go:299] handling current node
	I0927 01:14:47.140709       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0927 01:14:47.140713       1 main.go:322] Node multinode-108000-m02 has CIDR [10.244.1.0/24] 
	I0927 01:14:47.140952       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0927 01:14:47.140982       1 main.go:322] Node multinode-108000-m03 has CIDR [10.244.2.0/24] 
	I0927 01:14:57.140162       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0927 01:14:57.140327       1 main.go:299] handling current node
	I0927 01:14:57.140474       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0927 01:14:57.140678       1 main.go:322] Node multinode-108000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [e2ee40adba9a] <==
	I0927 01:15:35.315670       1 aggregator.go:171] initial CRD sync complete...
	I0927 01:15:35.315698       1 autoregister_controller.go:144] Starting autoregister controller
	I0927 01:15:35.315702       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0927 01:15:35.316271       1 cache.go:39] Caches are synced for autoregister controller
	I0927 01:15:35.373621       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0927 01:15:35.373943       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0927 01:15:35.374032       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0927 01:15:35.373925       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0927 01:15:35.375835       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0927 01:15:35.376366       1 shared_informer.go:320] Caches are synced for configmaps
	I0927 01:15:35.376759       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0927 01:15:35.380558       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0927 01:15:35.382904       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 01:15:35.383042       1 policy_source.go:224] refreshing policies
	I0927 01:15:35.416163       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0927 01:15:35.423673       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 01:15:36.276923       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0927 01:15:36.683118       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.14]
	I0927 01:15:36.684414       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 01:15:36.688345       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0927 01:15:37.475793       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0927 01:15:37.573338       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0927 01:15:37.582313       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0927 01:15:37.618490       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0927 01:15:37.622377       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [e8ecb49c95ed] <==
	W0927 01:15:03.775377       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.775650       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.775780       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.775932       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.776052       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.776122       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.776327       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.776461       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.776573       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.776653       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.776880       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.777075       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.777185       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.776095       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.777563       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.777750       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.777828       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.777999       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.778167       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.778298       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.778430       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.778592       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.778767       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.778925       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:15:03.779052       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [88a3b3c33bff] <==
	I0927 01:15:39.125130       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="308.054849ms"
	I0927 01:15:39.126285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="273.552µs"
	I0927 01:15:39.126741       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="309.796327ms"
	I0927 01:15:39.126854       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.429µs"
	I0927 01:15:39.291541       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 01:15:39.313683       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 01:15:39.314010       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0927 01:15:55.919547       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-108000-m02"
	I0927 01:15:55.920307       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000"
	I0927 01:15:55.926243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000"
	I0927 01:15:58.862537       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000"
	I0927 01:16:08.477053       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.263871ms"
	I0927 01:16:08.477325       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="115.944µs"
	I0927 01:16:08.497514       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="64.524µs"
	I0927 01:16:08.511717       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="5.774718ms"
	I0927 01:16:08.511920       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="72.261µs"
	I0927 01:16:18.671623       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-pwrqj"
	I0927 01:16:18.681461       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-pwrqj"
	I0927 01:16:18.681495       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-qlv2x"
	I0927 01:16:18.691416       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-qlv2x"
	I0927 01:16:18.870919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000-m02"
	I0927 01:16:18.881054       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000-m02"
	I0927 01:16:18.885334       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.056003ms"
	I0927 01:16:18.885645       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.623µs"
	I0927 01:16:23.907539       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000-m02"
	
	
	==> kube-controller-manager [e8c9a9508a99] <==
	I0927 01:13:55.665902       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.089µs"
	I0927 01:13:55.831187       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.136µs"
	I0927 01:13:55.833041       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="23.547µs"
	I0927 01:13:56.855154       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="3.229864ms"
	I0927 01:13:56.855642       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.164µs"
	I0927 01:14:29.155413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000-m03"
	I0927 01:14:29.162721       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000-m03"
	I0927 01:14:29.311401       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-108000-m02"
	I0927 01:14:29.311857       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000-m03"
	I0927 01:14:30.226731       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-108000-m02"
	I0927 01:14:30.226777       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-108000-m03\" does not exist"
	I0927 01:14:30.241140       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-108000-m03" podCIDRs=["10.244.2.0/24"]
	I0927 01:14:30.241178       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000-m03"
	I0927 01:14:30.241193       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000-m03"
	I0927 01:14:30.655327       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000-m03"
	I0927 01:14:30.941521       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000-m03"
	I0927 01:14:33.441584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000-m03"
	I0927 01:14:40.457422       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000-m03"
	I0927 01:14:48.404826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000-m03"
	I0927 01:14:48.406454       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-108000-m02"
	I0927 01:14:48.410725       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000-m03"
	I0927 01:14:51.063972       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000-m03"
	I0927 01:14:51.071014       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000-m03"
	I0927 01:14:51.367724       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-108000-m02"
	I0927 01:14:51.368027       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-108000-m03"
	
	
	==> kube-proxy [67dac98df54b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 01:12:26.303292       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 01:12:26.320311       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.14"]
	E0927 01:12:26.320404       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 01:12:26.371710       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 01:12:26.371802       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 01:12:26.371821       1 server_linux.go:169] "Using iptables Proxier"
	I0927 01:12:26.373822       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 01:12:26.374137       1 server.go:483] "Version info" version="v1.31.1"
	I0927 01:12:26.374164       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:12:26.375785       1 config.go:199] "Starting service config controller"
	I0927 01:12:26.376475       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 01:12:26.376920       1 config.go:328] "Starting node config controller"
	I0927 01:12:26.376946       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 01:12:26.379307       1 config.go:105] "Starting endpoint slice config controller"
	I0927 01:12:26.379335       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 01:12:26.478000       1 shared_informer.go:320] Caches are synced for node config
	I0927 01:12:26.478025       1 shared_informer.go:320] Caches are synced for service config
	I0927 01:12:26.479362       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e1f599a2004c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 01:15:36.692365       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 01:15:36.704185       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.14"]
	E0927 01:15:36.704365       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 01:15:36.734201       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 01:15:36.734367       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 01:15:36.734488       1 server_linux.go:169] "Using iptables Proxier"
	I0927 01:15:36.736664       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 01:15:36.737044       1 server.go:483] "Version info" version="v1.31.1"
	I0927 01:15:36.737384       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:15:36.738734       1 config.go:199] "Starting service config controller"
	I0927 01:15:36.739169       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 01:15:36.739317       1 config.go:328] "Starting node config controller"
	I0927 01:15:36.739477       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 01:15:36.739660       1 config.go:105] "Starting endpoint slice config controller"
	I0927 01:15:36.739690       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 01:15:36.840232       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 01:15:36.840339       1 shared_informer.go:320] Caches are synced for node config
	I0927 01:15:36.840351       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [0b00cd940822] <==
	I0927 01:12:22.915858       1 serving.go:386] Generated self-signed cert in-memory
	W0927 01:12:24.631192       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0927 01:12:24.631231       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0927 01:12:24.631290       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0927 01:12:24.631295       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0927 01:12:24.692688       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0927 01:12:24.692800       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:12:24.694346       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0927 01:12:24.694495       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0927 01:12:24.694884       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 01:12:24.694892       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0927 01:12:24.795751       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0927 01:15:02.782835       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7a1bae355a2e] <==
	I0927 01:15:33.240432       1 serving.go:386] Generated self-signed cert in-memory
	W0927 01:15:35.313864       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0927 01:15:35.313932       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0927 01:15:35.314113       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0927 01:15:35.314120       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0927 01:15:35.336003       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0927 01:15:35.336038       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:15:35.338627       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0927 01:15:35.338932       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0927 01:15:35.339533       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 01:15:35.339917       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0927 01:15:35.439606       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 01:15:45 multinode-108000 kubelet[1413]: E0927 01:15:45.901173    1413 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-hxdhm" podUID="ff9bbfa0-9278-44d7-abc5-7a38ed77ce23"
	Sep 27 01:15:45 multinode-108000 kubelet[1413]: E0927 01:15:45.911684    1413 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Sep 27 01:15:47 multinode-108000 kubelet[1413]: E0927 01:15:47.901138    1413 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-p6dk8" podUID="b899d20e-fd7c-4cdb-8a0f-0e131a5bcfa7"
	Sep 27 01:15:47 multinode-108000 kubelet[1413]: E0927 01:15:47.901001    1413 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-hxdhm" podUID="ff9bbfa0-9278-44d7-abc5-7a38ed77ce23"
	Sep 27 01:15:49 multinode-108000 kubelet[1413]: E0927 01:15:49.900907    1413 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-p6dk8" podUID="b899d20e-fd7c-4cdb-8a0f-0e131a5bcfa7"
	Sep 27 01:15:49 multinode-108000 kubelet[1413]: E0927 01:15:49.901043    1413 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-hxdhm" podUID="ff9bbfa0-9278-44d7-abc5-7a38ed77ce23"
	Sep 27 01:15:51 multinode-108000 kubelet[1413]: E0927 01:15:51.591437    1413 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 27 01:15:51 multinode-108000 kubelet[1413]: E0927 01:15:51.591600    1413 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff9bbfa0-9278-44d7-abc5-7a38ed77ce23-config-volume podName:ff9bbfa0-9278-44d7-abc5-7a38ed77ce23 nodeName:}" failed. No retries permitted until 2024-09-27 01:16:07.591580309 +0000 UTC m=+36.836031334 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ff9bbfa0-9278-44d7-abc5-7a38ed77ce23-config-volume") pod "coredns-7c65d6cfc9-hxdhm" (UID: "ff9bbfa0-9278-44d7-abc5-7a38ed77ce23") : object "kube-system"/"coredns" not registered
	Sep 27 01:15:51 multinode-108000 kubelet[1413]: E0927 01:15:51.692928    1413 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 27 01:15:51 multinode-108000 kubelet[1413]: E0927 01:15:51.693136    1413 projected.go:194] Error preparing data for projected volume kube-api-access-xsscc for pod default/busybox-7dff88458-p6dk8: object "default"/"kube-root-ca.crt" not registered
	Sep 27 01:15:51 multinode-108000 kubelet[1413]: E0927 01:15:51.693249    1413 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b899d20e-fd7c-4cdb-8a0f-0e131a5bcfa7-kube-api-access-xsscc podName:b899d20e-fd7c-4cdb-8a0f-0e131a5bcfa7 nodeName:}" failed. No retries permitted until 2024-09-27 01:16:07.693231259 +0000 UTC m=+36.937682284 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-xsscc" (UniqueName: "kubernetes.io/projected/b899d20e-fd7c-4cdb-8a0f-0e131a5bcfa7-kube-api-access-xsscc") pod "busybox-7dff88458-p6dk8" (UID: "b899d20e-fd7c-4cdb-8a0f-0e131a5bcfa7") : object "default"/"kube-root-ca.crt" not registered
	Sep 27 01:16:07 multinode-108000 kubelet[1413]: I0927 01:16:07.444041    1413 scope.go:117] "RemoveContainer" containerID="c5d1e02f34101f2f07639aad5a1c06a994aaf188cfdbe8643008a25724801a34"
	Sep 27 01:16:07 multinode-108000 kubelet[1413]: I0927 01:16:07.444524    1413 scope.go:117] "RemoveContainer" containerID="ac547d6aae7299eed1885a92494f0a70555f42e09d2e9ca7a677d150b3b743e9"
	Sep 27 01:16:07 multinode-108000 kubelet[1413]: E0927 01:16:07.444647    1413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e67377e5-f7c5-4625-9739-3703de1f4739)\"" pod="kube-system/storage-provisioner" podUID="e67377e5-f7c5-4625-9739-3703de1f4739"
	Sep 27 01:16:22 multinode-108000 kubelet[1413]: I0927 01:16:22.901730    1413 scope.go:117] "RemoveContainer" containerID="ac547d6aae7299eed1885a92494f0a70555f42e09d2e9ca7a677d150b3b743e9"
	Sep 27 01:16:30 multinode-108000 kubelet[1413]: E0927 01:16:30.912128    1413 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 01:16:30 multinode-108000 kubelet[1413]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 01:16:30 multinode-108000 kubelet[1413]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 01:16:30 multinode-108000 kubelet[1413]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 01:16:30 multinode-108000 kubelet[1413]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 01:17:30 multinode-108000 kubelet[1413]: E0927 01:17:30.912475    1413 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 01:17:30 multinode-108000 kubelet[1413]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 01:17:30 multinode-108000 kubelet[1413]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 01:17:30 multinode-108000 kubelet[1413]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 01:17:30 multinode-108000 kubelet[1413]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-108000 -n multinode-108000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-108000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (141.16s)
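Note on the failure above: the describe output for multinode-108000-m02 shows the classic signature of a worker that stopped reporting after the cluster restart. The node carries the node.kubernetes.io/unreachable NoExecute/NoSchedule taints, all four kubelet conditions are stuck at Unknown with reason NodeStatusUnknown ("Kubelet stopped posting node status."), and the node-controller flips it to NodeNotReady 71s before the dump. The Go sketch below is illustrative only and not part of the test suite: it reads the same Ready condition via client-go that kubectl describe node surfaces here. The kubeconfig path is a placeholder, not one taken from this run.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; substitute the one used by the run
	// under inspection.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// A node in the state dumped above prints
				// Ready=Unknown reason=NodeStatusUnknown.
				fmt.Printf("%s Ready=%s reason=%s\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}

Run against a cluster in the state captured above, the second node would report Ready=Unknown with reason NodeStatusUnknown, matching the post-mortem.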

TestScheduledStopUnix (142.13s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-860000 --memory=2048 --driver=hyperkit 
E0926 18:22:57.610336    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:23:14.520612    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-860000 --memory=2048 --driver=hyperkit : exit status 80 (2m16.796729115s)

-- stdout --
	* [scheduled-stop-860000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-860000" primary control-plane node in "scheduled-stop-860000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-860000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f6:8a:28:14:69:c8
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-860000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1a:19:2e:2:77:f8
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1a:19:2e:2:77:f8
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-860000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-860000" primary control-plane node in "scheduled-stop-860000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-860000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f6:8a:28:14:69:c8
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-860000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1a:19:2e:2:77:f8
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1a:19:2e:2:77:f8
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-26 18:23:47.001026 -0700 PDT m=+4189.738745272
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-860000 -n scheduled-stop-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-860000 -n scheduled-stop-860000: exit status 7 (80.753167ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E0926 18:23:47.080090    5831 status.go:386] failed to get driver ip: getting IP: IP address is not set
	E0926 18:23:47.080112    5831 status.go:119] status error: getting IP: IP address is not set
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-860000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "scheduled-stop-860000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-860000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-860000: (5.254536999s)
--- FAIL: TestScheduledStopUnix (142.13s)
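Both VM creation attempts above die on the same hyperkit symptom: the driver watches the macOS vmnet DHCP lease file for an entry matching the VM's MAC address and gives up when none appears. A minimal triage sketch, assuming the stock lease file location (/var/db/dhcpd_leases) and using the MAC reported in this run:

# Check whether vmnet ever handed out a lease for the failing MAC.
# Lease entries carry the MAC as "hw_address=1,<mac>"; vmnet drops leading
# zeros in octets, hence "2" rather than "02" in this run's address.
grep -A 3 'hw_address=1,1a:19:2e:2:77:f8' /var/db/dhcpd_leases

# A commonly suggested (but unverified here) reset is to clear stale leases
# and remove the half-created profile before retrying, as the log advises:
sudo rm /var/db/dhcpd_leases
out/minikube-darwin-amd64 delete -p scheduled-stop-860000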
TestPause/serial/Start (141.48s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-199000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-199000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : exit status 80 (2m21.394796905s)
-- stdout --
	* [pause-199000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "pause-199000" primary control-plane node in "pause-199000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "pause-199000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7e:6e:d6:a2:58:92
	* Failed to start hyperkit VM. Running "minikube delete -p pause-199000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:9a:7d:40:b8:70
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:9a:7d:40:b8:70
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-199000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-199000 -n pause-199000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-199000 -n pause-199000: exit status 7 (80.906322ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E0926 19:05:00.634406    8338 status.go:386] failed to get driver ip: getting IP: IP address is not set
	E0926 19:05:00.634431    8338 status.go:119] status error: getting IP: IP address is not set
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-199000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/Start (141.48s)
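The post-mortem helper drives minikube status through a Go template (--format={{.Host}}), and the same flag can surface the other status fields when triaging by hand; exit status 7 here appears to be the combined not-running bitmask for host, kubelet, and apiserver, which is why the harness notes "may be ok". A sketch, assuming the standard Host/Kubelet/APIServer field names:

out/minikube-darwin-amd64 status -p pause-199000 \
  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'
# The full structure is easier to inspect as JSON:
out/minikube-darwin-amd64 status -p pause-199000 -o json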

Test pass (179/217)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 20.98
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.29
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.31.1/json-events 6.62
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.29
18 TestDownloadOnly/v1.31.1/DeleteAll 0.23
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.21
21 TestBinaryMirror 0.97
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.17
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.15
27 TestAddons/Setup 226.62
29 TestAddons/serial/Volcano 45.15
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 18.61
35 TestAddons/parallel/InspektorGadget 10.5
36 TestAddons/parallel/MetricsServer 5.46
38 TestAddons/parallel/CSI 52.58
39 TestAddons/parallel/Headlamp 17.6
40 TestAddons/parallel/CloudSpanner 5.52
41 TestAddons/parallel/LocalPath 52.42
42 TestAddons/parallel/NvidiaDevicePlugin 5.34
43 TestAddons/parallel/Yakd 11.7
44 TestAddons/StoppedEnableDisable 5.93
52 TestHyperKitDriverInstallOrUpdate 8.84
56 TestErrorSpam/start 1.32
57 TestErrorSpam/status 0.45
58 TestErrorSpam/pause 5.6
59 TestErrorSpam/unpause 173.3
60 TestErrorSpam/stop 155.81
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 167.96
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 38.94
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.98
72 TestFunctional/serial/CacheCmd/cache/add_local 1.36
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
74 TestFunctional/serial/CacheCmd/cache/list 0.08
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.03
77 TestFunctional/serial/CacheCmd/cache/delete 0.16
78 TestFunctional/serial/MinikubeKubectlCmd 1.21
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.6
80 TestFunctional/serial/ExtraConfig 37.79
81 TestFunctional/serial/ComponentHealth 0.05
82 TestFunctional/serial/LogsCmd 2.59
83 TestFunctional/serial/LogsFileCmd 2.79
84 TestFunctional/serial/InvalidService 4
86 TestFunctional/parallel/ConfigCmd 0.51
87 TestFunctional/parallel/DashboardCmd 15.58
88 TestFunctional/parallel/DryRun 1
89 TestFunctional/parallel/InternationalLanguage 0.49
90 TestFunctional/parallel/StatusCmd 0.53
94 TestFunctional/parallel/ServiceCmdConnect 7.57
95 TestFunctional/parallel/AddonsCmd 0.23
96 TestFunctional/parallel/PersistentVolumeClaim 32.16
98 TestFunctional/parallel/SSHCmd 0.29
99 TestFunctional/parallel/CpCmd 1.07
100 TestFunctional/parallel/MySQL 29.4
101 TestFunctional/parallel/FileSync 0.21
102 TestFunctional/parallel/CertSync 1.03
106 TestFunctional/parallel/NodeLabels 0.05
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.17
110 TestFunctional/parallel/License 0.61
111 TestFunctional/parallel/Version/short 0.1
112 TestFunctional/parallel/Version/components 0.49
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.15
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.16
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.16
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.15
117 TestFunctional/parallel/ImageCommands/ImageBuild 2.48
118 TestFunctional/parallel/ImageCommands/Setup 1.9
119 TestFunctional/parallel/DockerEnv/bash 0.62
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.27
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.01
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.61
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.4
126 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.25
127 TestFunctional/parallel/ImageCommands/ImageRemove 0.3
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.46
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.32
130 TestFunctional/parallel/ServiceCmd/DeployApp 23.12
131 TestFunctional/parallel/ServiceCmd/List 0.2
132 TestFunctional/parallel/ServiceCmd/JSONOutput 0.21
133 TestFunctional/parallel/ServiceCmd/HTTPS 0.24
134 TestFunctional/parallel/ServiceCmd/Format 0.24
135 TestFunctional/parallel/ServiceCmd/URL 0.27
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.36
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.13
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
143 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
145 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.28
148 TestFunctional/parallel/ProfileCmd/profile_list 0.29
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
150 TestFunctional/parallel/MountCmd/any-port 7.06
151 TestFunctional/parallel/MountCmd/specific-port 1.67
152 TestFunctional/parallel/MountCmd/VerifyCleanup 2.09
153 TestFunctional/delete_echo-server_images 0.05
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 201.84
160 TestMultiControlPlane/serial/DeployApp 5.69
161 TestMultiControlPlane/serial/PingHostFromPods 1.31
162 TestMultiControlPlane/serial/AddWorkerNode 52.78
163 TestMultiControlPlane/serial/NodeLabels 0.05
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.48
165 TestMultiControlPlane/serial/CopyFile 9.05
166 TestMultiControlPlane/serial/StopSecondaryNode 8.7
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.4
168 TestMultiControlPlane/serial/RestartSecondaryNode 42.68
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.49
180 TestImageBuild/serial/Setup 37.41
181 TestImageBuild/serial/NormalBuild 1.85
182 TestImageBuild/serial/BuildWithBuildArg 0.85
183 TestImageBuild/serial/BuildWithDockerIgnore 0.68
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.82
188 TestJSONOutput/start/Command 77.63
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.49
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.46
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 8.36
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.58
216 TestMainNoArgs 0.08
217 TestMinikubeProfile 84.98
223 TestMultiNode/serial/FreshStart2Nodes 109.07
224 TestMultiNode/serial/DeployApp2Nodes 4.82
225 TestMultiNode/serial/PingHostFrom2Pods 0.89
226 TestMultiNode/serial/AddNode 45.95
227 TestMultiNode/serial/MultiNodeLabels 0.05
228 TestMultiNode/serial/ProfileList 0.37
229 TestMultiNode/serial/CopyFile 5.44
230 TestMultiNode/serial/StopNode 2.84
231 TestMultiNode/serial/StartAfterStop 36.5
232 TestMultiNode/serial/RestartKeepsNodes 188.57
233 TestMultiNode/serial/DeleteNode 3.37
234 TestMultiNode/serial/StopMultiNode 16.78
236 TestMultiNode/serial/ValidateNameConflict 46.27
240 TestPreload 181.35
243 TestSkaffold 113.59
246 TestRunningBinaryUpgrade 81.67
248 TestKubernetesUpgrade 1368.51
261 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.11
262 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 7.12
263 TestStoppedBinaryUpgrade/Setup 1
264 TestStoppedBinaryUpgrade/Upgrade 165.78
267 TestStoppedBinaryUpgrade/MinikubeLogs 2.75
276 TestNoKubernetes/serial/StartNoK8sWithVersion 0.4
277 TestNoKubernetes/serial/StartWithK8s 40.17
279 TestNoKubernetes/serial/StartWithStopK8s 17.82
280 TestNoKubernetes/serial/Start 18.84
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.13
282 TestNoKubernetes/serial/ProfileList 0.58
283 TestNoKubernetes/serial/Stop 2.38
284 TestNoKubernetes/serial/StartNoArgs 19.41
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.13
TestDownloadOnly/v1.20.0/json-events (20.98s)
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-592000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-592000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (20.98023869s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (20.98s)
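With -o=json, start emits one CloudEvents-style JSON object per line instead of the usual progress glyphs, which is what the json-events assertions consume. A sketch of watching the step stream with jq, assuming minikube's io.k8s.sigs.minikube.step event type and a hypothetical profile name:

# Print the name of each setup step as it is published.
out/minikube-darwin-amd64 start -o=json --download-only -p download-demo \
    --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'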
TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0926 17:14:18.105351    1679 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0926 17:14:18.105497    1679 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
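preload-exists is a pure local check: it verifies that the tarball fetched during json-events landed in the cache, with no network involved. The same check by hand, using the path from the log line above:

# Expect the v1.20.0 docker/overlay2 tarball after json-events completes.
ls -lh /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/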
TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)
TestDownloadOnly/v1.20.0/LogsDuration (0.29s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-592000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-592000: exit status 85 (290.958481ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-592000 | jenkins | v1.34.0 | 26 Sep 24 17:13 PDT |          |
	|         | -p download-only-592000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/26 17:13:57
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 17:13:57.178668    1680 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:13:57.178854    1680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:13:57.178860    1680 out.go:358] Setting ErrFile to fd 2...
	I0926 17:13:57.178863    1680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:13:57.179024    1680 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	W0926 17:13:57.179119    1680 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19711-1128/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19711-1128/.minikube/config/config.json: no such file or directory
	I0926 17:13:57.180938    1680 out.go:352] Setting JSON to true
	I0926 17:13:57.206116    1680 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":807,"bootTime":1727395230,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0926 17:13:57.206269    1680 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:13:57.228870    1680 out.go:97] [download-only-592000] minikube v1.34.0 on Darwin 14.6.1
	I0926 17:13:57.229075    1680 notify.go:220] Checking for updates...
	W0926 17:13:57.229084    1680 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball: no such file or directory
	I0926 17:13:57.250299    1680 out.go:169] MINIKUBE_LOCATION=19711
	I0926 17:13:57.271591    1680 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:13:57.293332    1680 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0926 17:13:57.314687    1680 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:13:57.337581    1680 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	W0926 17:13:57.379237    1680 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0926 17:13:57.379556    1680 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:13:57.426404    1680 out.go:97] Using the hyperkit driver based on user configuration
	I0926 17:13:57.426434    1680 start.go:297] selected driver: hyperkit
	I0926 17:13:57.426442    1680 start.go:901] validating driver "hyperkit" against <nil>
	I0926 17:13:57.426551    1680 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:13:57.426737    1680 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19711-1128/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0926 17:13:57.844391    1680 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0926 17:13:57.849320    1680 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:13:57.849342    1680 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0926 17:13:57.849373    1680 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 17:13:57.853776    1680 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0926 17:13:57.854282    1680 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 17:13:57.854312    1680 cni.go:84] Creating CNI manager for ""
	I0926 17:13:57.854365    1680 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0926 17:13:57.854437    1680 start.go:340] cluster config:
	{Name:download-only-592000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-592000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:13:57.854656    1680 iso.go:125] acquiring lock: {Name:mka8a9c5a237c1e4ae233281d2ff7965d13a843d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:13:57.877552    1680 out.go:97] Downloading VM boot image ...
	I0926 17:13:57.877622    1680 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0926 17:14:09.640867    1680 out.go:97] Starting "download-only-592000" primary control-plane node in "download-only-592000" cluster
	I0926 17:14:09.640914    1680 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0926 17:14:09.707456    1680 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0926 17:14:09.707479    1680 cache.go:56] Caching tarball of preloaded images
	I0926 17:14:09.708175    1680 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0926 17:14:09.728083    1680 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0926 17:14:09.728121    1680 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0926 17:14:09.809461    1680 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0926 17:14:16.060122    1680 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0926 17:14:16.060755    1680 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0926 17:14:16.611577    1680 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0926 17:14:16.611835    1680 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/download-only-592000/config.json ...
	I0926 17:14:16.611864    1680 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/download-only-592000/config.json: {Name:mke3a36a0464aebc3e5fe28bbd2dec7dd50fb56c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:16.613471    1680 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0926 17:14:16.613795    1680 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-592000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-592000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.29s)
TestDownloadOnly/v1.20.0/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-592000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)
TestDownloadOnly/v1.31.1/json-events (6.62s)
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-120000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-120000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=hyperkit : (6.619468725s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.62s)
TestDownloadOnly/v1.31.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0926 17:14:25.453936    1679 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0926 17:14:25.453968    1679 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)
TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)
TestDownloadOnly/v1.31.1/LogsDuration (0.29s)
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-120000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-120000: exit status 85 (293.602163ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-592000 | jenkins | v1.34.0 | 26 Sep 24 17:13 PDT |                     |
	|         | -p download-only-592000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:14 PDT |
	| delete  | -p download-only-592000        | download-only-592000 | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:14 PDT |
	| start   | -o=json --download-only        | download-only-120000 | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT |                     |
	|         | -p download-only-120000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/26 17:14:18
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 17:14:18.886761    1712 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:14:18.886935    1712 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:14:18.886941    1712 out.go:358] Setting ErrFile to fd 2...
	I0926 17:14:18.886945    1712 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:14:18.887708    1712 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 17:14:18.889422    1712 out.go:352] Setting JSON to true
	I0926 17:14:18.912664    1712 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":828,"bootTime":1727395230,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0926 17:14:18.912812    1712 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:14:18.933637    1712 out.go:97] [download-only-120000] minikube v1.34.0 on Darwin 14.6.1
	I0926 17:14:18.933725    1712 notify.go:220] Checking for updates...
	I0926 17:14:18.954436    1712 out.go:169] MINIKUBE_LOCATION=19711
	I0926 17:14:18.975482    1712 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:14:18.996622    1712 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0926 17:14:19.017537    1712 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:14:19.038731    1712 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	W0926 17:14:19.082533    1712 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0926 17:14:19.083006    1712 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:14:19.112787    1712 out.go:97] Using the hyperkit driver based on user configuration
	I0926 17:14:19.112847    1712 start.go:297] selected driver: hyperkit
	I0926 17:14:19.112862    1712 start.go:901] validating driver "hyperkit" against <nil>
	I0926 17:14:19.113056    1712 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:14:19.113265    1712 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19711-1128/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0926 17:14:19.124001    1712 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0926 17:14:19.128013    1712 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:14:19.128031    1712 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0926 17:14:19.128053    1712 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 17:14:19.130891    1712 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0926 17:14:19.131044    1712 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 17:14:19.131076    1712 cni.go:84] Creating CNI manager for ""
	I0926 17:14:19.131121    1712 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 17:14:19.131133    1712 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 17:14:19.131192    1712 start.go:340] cluster config:
	{Name:download-only-120000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:14:19.131284    1712 iso.go:125] acquiring lock: {Name:mka8a9c5a237c1e4ae233281d2ff7965d13a843d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:14:19.152596    1712 out.go:97] Starting "download-only-120000" primary control-plane node in "download-only-120000" cluster
	I0926 17:14:19.152623    1712 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:14:19.201720    1712 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0926 17:14:19.201762    1712 cache.go:56] Caching tarball of preloaded images
	I0926 17:14:19.201945    1712 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:14:19.222351    1712 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0926 17:14:19.222367    1712 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0926 17:14:19.296062    1712 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0926 17:14:23.483509    1712 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0926 17:14:23.483705    1712 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19711-1128/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-120000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-120000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.29s)
TestDownloadOnly/v1.31.1/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.23s)
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.21s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-120000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.21s)
TestBinaryMirror (0.97s)
=== RUN   TestBinaryMirror
I0926 17:14:26.604459    1679 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-780000 --alsologtostderr --binary-mirror http://127.0.0.1:49641 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-780000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-780000
--- PASS: TestBinaryMirror (0.97s)
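The --binary-mirror flag substitutes the given URL for the dl.k8s.io host while keeping the /release/<version>/bin/<os>/<arch>/ path, so the test only needs a throwaway local HTTP server. A sketch of standing one up by hand; the directory layout is an assumption derived from the kubectl URL logged above, and minikube also fetches the matching .sha256 checksum file:

mkdir -p mirror/release/v1.31.1/bin/darwin/amd64
# Place a pre-fetched kubectl and its checksum here (source is hypothetical):
cp kubectl kubectl.sha256 mirror/release/v1.31.1/bin/darwin/amd64/
( cd mirror && python3 -m http.server 49641 ) &
out/minikube-darwin-amd64 start --download-only -p binary-mirror-demo \
    --binary-mirror http://127.0.0.1:49641 --driver=hyperkit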
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-433000
addons_test.go:975: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-433000: exit status 85 (165.404192ms)
-- stdout --
	* Profile "addons-433000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-433000"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.15s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-433000
addons_test.go:986: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-433000: exit status 85 (145.117965ms)
-- stdout --
	* Profile "addons-433000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-433000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.15s)
TestAddons/Setup (226.62s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-433000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-darwin-amd64 start -p addons-433000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns: (3m46.615076511s)
--- PASS: TestAddons/Setup (226.62s)
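Everything here is enabled in a single start invocation via repeated --addons flags, but the same switches can be flipped after the fact, which is how the serial and parallel subtests below tear their addons down. For example:

out/minikube-darwin-amd64 addons list -p addons-433000            # current enabled/disabled state
out/minikube-darwin-amd64 addons enable metrics-server -p addons-433000
out/minikube-darwin-amd64 addons disable volcano -p addons-433000 --alsologtostderr -v=1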
TestAddons/serial/Volcano (45.15s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 12.089132ms
addons_test.go:835: volcano-scheduler stabilized in 12.190919ms
addons_test.go:843: volcano-admission stabilized in 12.28012ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-xtfw7" [5b2b3a15-e3c2-4684-8e22-b4ae71901209] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003195636s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-cgxxv" [ef25d1e2-c8d3-4b70-8ec0-791fc199bc65] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.002561989s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-bwcr8" [efbacb60-e1cb-45a1-aee3-9b16df2a3f46] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002373947s
addons_test.go:870: (dbg) Run:  kubectl --context addons-433000 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-433000 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-433000 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [73df9e45-ac77-450b-86c2-2d7e9d6043ba] Pending
helpers_test.go:344: "test-job-nginx-0" [73df9e45-ac77-450b-86c2-2d7e9d6043ba] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [73df9e45-ac77-450b-86c2-2d7e9d6043ba] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 18.004163073s
addons_test.go:906: (dbg) Run:  out/minikube-darwin-amd64 -p addons-433000 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-darwin-amd64 -p addons-433000 addons disable volcano --alsologtostderr -v=1: (10.837171249s)
--- PASS: TestAddons/serial/Volcano (45.15s)
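The subtest deletes the one-shot volcano-admission-init job, submits testdata/vcjob.yaml, and waits for the pod labelled volcano.sh/job-name=test-job. The repo's manifest is not reproduced in this report; a minimal stand-in with the same shape, sketched against the batch.volcano.sh/v1alpha1 schema (field values are assumptions, not the test's actual YAML, and the my-volcano namespace must already exist):

kubectl --context addons-433000 apply -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  schedulerName: volcano
  minAvailable: 1
  tasks:
    - name: nginx
      replicas: 1
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: nginx
              image: nginx:latest
EOF
kubectl --context addons-433000 get vcjob -n my-volcano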
TestAddons/serial/GCPAuth/Namespaces (0.11s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-433000 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-433000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)
TestAddons/parallel/Ingress (18.61s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-433000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-433000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-433000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [27f507f2-c10c-4488-9f85-2e94f6945f00] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [27f507f2-c10c-4488-9f85-2e94f6945f00] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004062177s
I0926 17:29:00.332929    1679 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p addons-433000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-433000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 -p addons-433000 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p addons-433000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-darwin-amd64 -p addons-433000 addons disable ingress-dns --alsologtostderr -v=1: (1.109893191s)
addons_test.go:309: (dbg) Run:  out/minikube-darwin-amd64 -p addons-433000 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-darwin-amd64 -p addons-433000 addons disable ingress --alsologtostderr -v=1: (7.579340691s)
--- PASS: TestAddons/parallel/Ingress (18.61s)
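The ingress assertions boil down to three host-side probes: resolve the VM IP, curl through the ingress with a Host header, and resolve a test name against the ingress-dns addon. Reproduced by hand with commands taken from the log (the IP, 192.169.0.2, is this run's value and will differ):

IP=$(out/minikube-darwin-amd64 -p addons-433000 ip)
out/minikube-darwin-amd64 -p addons-433000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
nslookup hello-john.test "$IP"     # served by the ingress-dns addon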
TestAddons/parallel/InspektorGadget (10.5s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9kgzh" [eb998461-5502-49d4-920e-a1444d6865f5] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005425941s
addons_test.go:789: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-433000
addons_test.go:789: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-433000: (5.495616003s)
--- PASS: TestAddons/parallel/InspektorGadget (10.50s)
TestAddons/parallel/MetricsServer (5.46s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 1.810286ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-lt2hp" [2012f46c-0434-4a4c-bc66-5ff170a57a47] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003666449s
addons_test.go:413: (dbg) Run:  kubectl --context addons-433000 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p addons-433000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.46s)

                                                
                                    
TestAddons/parallel/CSI (52.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0926 17:28:17.748934    1679 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0926 17:28:17.752925    1679 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0926 17:28:17.752936    1679 kapi.go:107] duration metric: took 4.015166ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 4.021024ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-433000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-433000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0dd81b09-3c50-49c7-872d-762e4bb2723a] Pending
helpers_test.go:344: "task-pv-pod" [0dd81b09-3c50-49c7-872d-762e4bb2723a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0dd81b09-3c50-49c7-872d-762e4bb2723a] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.004768584s
addons_test.go:528: (dbg) Run:  kubectl --context addons-433000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-433000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-433000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-433000 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-433000 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-433000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-433000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6e517d2b-840d-4a6d-a997-3f6e40cb65a0] Pending
helpers_test.go:344: "task-pv-pod-restore" [6e517d2b-840d-4a6d-a997-3f6e40cb65a0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6e517d2b-840d-4a6d-a997-3f6e40cb65a0] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004014041s
addons_test.go:570: (dbg) Run:  kubectl --context addons-433000 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-433000 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-433000 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-darwin-amd64 -p addons-433000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-darwin-amd64 -p addons-433000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.41235607s)
addons_test.go:586: (dbg) Run:  out/minikube-darwin-amd64 -p addons-433000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.58s)
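The long run of helpers_test.go:394 lines above is a poll loop on the PVC's .status.phase. A minimal Go sketch of that loop, shelling out to kubectl exactly as the log shows; the 2-second interval and the helper name are assumptions, not the suite's actual helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPVCBound polls `kubectl get pvc <name> -o jsonpath={.status.phase}`
// until the claim reports Bound or the timeout elapses.
func waitPVCBound(kubeContext, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", namespace, name, timeout)
}

func main() {
	// Values taken from the log: context addons-433000, claim hpvc, 6m wait.
	if err := waitPVCBound("addons-433000", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}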

                                                
                                    
TestAddons/parallel/Headlamp (17.6s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-433000 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-433000 --alsologtostderr -v=1: (1.125746974s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-wnvkv" [e3422dff-9186-470f-b660-e35c8e1af33b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-wnvkv" [e3422dff-9186-470f-b660-e35c8e1af33b] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.005588683s
addons_test.go:777: (dbg) Run:  out/minikube-darwin-amd64 -p addons-433000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-darwin-amd64 -p addons-433000 addons disable headlamp --alsologtostderr -v=1: (5.472682953s)
--- PASS: TestAddons/parallel/Headlamp (17.60s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.52s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-p8d5s" [2d378cfc-f0d6-48be-8066-9be08e8cf028] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004736696s
addons_test.go:808: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-433000
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

                                                
                                    
TestAddons/parallel/LocalPath (52.42s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-433000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-433000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-433000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [aa2719e3-66cd-4000-965e-4c4ef2fadd4c] Pending
helpers_test.go:344: "test-local-path" [aa2719e3-66cd-4000-965e-4c4ef2fadd4c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [aa2719e3-66cd-4000-965e-4c4ef2fadd4c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [aa2719e3-66cd-4000-965e-4c4ef2fadd4c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005949028s
addons_test.go:938: (dbg) Run:  kubectl --context addons-433000 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-darwin-amd64 -p addons-433000 ssh "cat /opt/local-path-provisioner/pvc-6ac8fe22-befd-49d4-b839-536d0bf298f4_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-433000 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-433000 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-darwin-amd64 -p addons-433000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-darwin-amd64 -p addons-433000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.78398279s)
--- PASS: TestAddons/parallel/LocalPath (52.42s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.34s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mzlmc" [8d39b932-83e4-436e-9ba6-cf639dfdccfa] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004618642s
addons_test.go:1002: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-433000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.34s)

                                                
                                    
TestAddons/parallel/Yakd (11.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-4l2r5" [390285ae-551b-4a19-83f5-d9720aa984ba] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004468698s
addons_test.go:1014: (dbg) Run:  out/minikube-darwin-amd64 -p addons-433000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-darwin-amd64 -p addons-433000 addons disable yakd --alsologtostderr -v=1: (5.691238847s)
--- PASS: TestAddons/parallel/Yakd (11.70s)

                                                
                                    
TestAddons/StoppedEnableDisable (5.93s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-433000
addons_test.go:170: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-433000: (5.384444537s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-433000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-433000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-433000
--- PASS: TestAddons/StoppedEnableDisable (5.93s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (8.84s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
I0926 18:26:02.031075    1679 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0926 18:26:02.031246    1679 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
W0926 18:26:02.871785    1679 install.go:62] docker-machine-driver-hyperkit: exit status 1
W0926 18:26:02.872015    1679 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0926 18:26:02.872065    1679 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 -> /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate3729412911/001/docker-machine-driver-hyperkit
I0926 18:26:03.356754    1679 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 Dst:/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate3729412911/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x5539740 0x5539740 0x5539740 0x5539740 0x5539740 0x5539740 0x5539740] Decompressors:map[bz2:0xc00048d6b0 gz:0xc00048d6b8 tar:0xc00048d610 tar.bz2:0xc00048d620 tar.gz:0xc00048d630 tar.xz:0xc00048d690 tar.zst:0xc00048d6a0 tbz2:0xc00048d620 tgz:0xc00048d630 txz:0xc00048d690 tzst:0xc00048d6a0 xz:0xc00048d6c0 zip:0xc00048d6d0 zst:0xc00048d6c8] Getters:map[file:0xc000739390 http:0xc00098d450 https:0xc00098d4a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0926 18:26:03.356801    1679 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate3729412911/001/docker-machine-driver-hyperkit
I0926 18:26:06.663676    1679 install.go:79] stdout: 
W0926 18:26:06.663812    1679 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate3729412911/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate3729412911/001/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
I0926 18:26:06.663844    1679 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate3729412911/001/docker-machine-driver-hyperkit]
I0926 18:26:06.680660    1679 install.go:106] running: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate3729412911/001/docker-machine-driver-hyperkit]
I0926 18:26:06.696130    1679 install.go:99] testing: [sudo -n chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate3729412911/001/docker-machine-driver-hyperkit]
I0926 18:26:06.710875    1679 install.go:106] running: [sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate3729412911/001/docker-machine-driver-hyperkit]
I0926 18:26:06.739789    1679 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0926 18:26:06.739918    1679 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0926 18:26:07.523727    1679 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W0926 18:26:07.523750    1679 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W0926 18:26:07.523813    1679 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0926 18:26:07.523849    1679 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 -> /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate3729412911/002/docker-machine-driver-hyperkit
I0926 18:26:07.908075    1679 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 Dst:/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate3729412911/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x5539740 0x5539740 0x5539740 0x5539740 0x5539740 0x5539740 0x5539740] Decompressors:map[bz2:0xc00048d6b0 gz:0xc00048d6b8 tar:0xc00048d610 tar.bz2:0xc00048d620 tar.gz:0xc00048d630 tar.xz:0xc00048d690 tar.zst:0xc00048d6a0 tbz2:0xc00048d620 tgz:0xc00048d630 txz:0xc00048d690 tzst:0xc00048d6a0 xz:0xc00048d6c0 zip:0xc00048d6d0 zst:0xc00048d6c8] Getters:map[file:0xc0005efc30 http:0xc00028d130 https:0xc00028d180] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0926 18:26:07.908107    1679 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate3729412911/002/docker-machine-driver-hyperkit
I0926 18:26:10.794539    1679 install.go:79] stdout: 
W0926 18:26:10.794677    1679 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate3729412911/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate3729412911/002/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
I0926 18:26:10.794703    1679 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate3729412911/002/docker-machine-driver-hyperkit]
I0926 18:26:10.810161    1679 install.go:106] running: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate3729412911/002/docker-machine-driver-hyperkit]
I0926 18:26:10.825710    1679 install.go:99] testing: [sudo -n chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate3729412911/002/docker-machine-driver-hyperkit]
I0926 18:26:10.839903    1679 install.go:106] running: [sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate3729412911/002/docker-machine-driver-hyperkit]
--- PASS: TestHyperKitDriverInstallOrUpdate (8.84s)
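The driver.go:46 lines above show the download fallback at work: the arch-specific artifact's checksum URL 404s, so the common version is fetched instead ("trying to get the common version"). A rough Go sketch of that try-then-fall-back shape, using plain net/http rather than the go-getter library the real download path uses, with checksum verification omitted for brevity:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetch downloads url to dst, treating any non-200 status as a failure,
// mirroring the "bad response code: 404" error in the log above.
func fetch(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit"
	dst := "docker-machine-driver-hyperkit"
	// Arch-specific artifact first; fall back to the common one on failure.
	if err := fetch(base+"-amd64", dst); err != nil {
		fmt.Println("arch specific download failed:", err, "- trying the common version")
		if err := fetch(base, dst); err != nil {
			panic(err)
		}
	}
}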

                                                
                                    
TestErrorSpam/start (1.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 start --dry-run
--- PASS: TestErrorSpam/start (1.32s)

                                                
                                    
TestErrorSpam/status (0.45s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 status: exit status 6 (149.370613ms)

                                                
                                                
-- stdout --
	nospam-580000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0926 17:30:35.074070    2555 status.go:451] kubeconfig endpoint: get endpoint: "nospam-580000" does not appear in /Users/jenkins/minikube-integration/19711-1128/kubeconfig

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 status" failed: exit status 6
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 status: exit status 6 (152.380927ms)

                                                
                                                
-- stdout --
	nospam-580000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0926 17:30:35.226687    2560 status.go:451] kubeconfig endpoint: get endpoint: "nospam-580000" does not appear in /Users/jenkins/minikube-integration/19711-1128/kubeconfig

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 status" failed: exit status 6
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 status: exit status 6 (149.635794ms)

                                                
                                                
-- stdout --
	nospam-580000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0926 17:30:35.376565    2565 status.go:451] kubeconfig endpoint: get endpoint: "nospam-580000" does not appear in /Users/jenkins/minikube-integration/19711-1128/kubeconfig

                                                
                                                
** /stderr **
error_spam_test.go:184: "out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 status" failed: exit status 6
--- PASS: TestErrorSpam/status (0.45s)
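Each status call above exits 6 because the profile's kubeconfig entry is missing (kubeconfig: Misconfigured). A small Go sketch, not the test's code, of how a harness can run the same command and recover that exit code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "nospam-580000", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	// A non-zero exit surfaces as *exec.ExitError; ExitCode() yields the
	// status the shell would see (6 in the runs logged above).
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("non-zero exit:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run:", err)
	}
}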

                                                
                                    
TestErrorSpam/pause (5.6s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 pause: exit status 80 (1.661772573s)

                                                
                                                
-- stdout --
	* Pausing node nospam-580000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 pause" failed: exit status 80
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 pause: exit status 80 (1.955104027s)

                                                
                                                
-- stdout --
	* Pausing node nospam-580000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 pause" failed: exit status 80
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 pause: exit status 80 (1.985976804s)

                                                
                                                
-- stdout --
	* Pausing node nospam-580000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:184: "out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.60s)
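The pause failures above all reduce to sudo systemctl disable --now kubelet failing because kubelet.service is not installed in the guest. A hedged Go sketch of probing for the unit before acting on it; this probe-first approach is illustrative only, not what minikube does:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// unitExists reports whether systemd knows about the named unit file.
// list-unit-files prints nothing for a pattern with no matches, so empty
// output means the unit is absent.
func unitExists(name string) bool {
	out, _ := exec.Command("systemctl", "list-unit-files", "--no-legend", name).Output()
	return strings.TrimSpace(string(out)) != ""
}

func main() {
	if unitExists("kubelet.service") {
		if err := exec.Command("sudo", "systemctl", "disable", "--now", "kubelet").Run(); err != nil {
			fmt.Println("disable failed:", err)
		}
	} else {
		// Avoids the hard "Unit file kubelet.service does not exist." failure.
		fmt.Println("kubelet.service not installed; nothing to pause")
	}
}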

                                                
                                    
TestErrorSpam/unpause (173.3s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 unpause: exit status 80 (52.860726353s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-580000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format=<no value>: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 unpause" failed: exit status 80
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 unpause: exit status 80 (1m0.228126335s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-580000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format=<no value>: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 unpause" failed: exit status 80
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 unpause
E0926 17:33:14.347778    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:33:14.355386    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:33:14.369120    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:33:14.392203    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:33:14.435689    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:33:14.517372    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:33:14.679483    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:33:15.002379    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:33:15.645958    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:33:16.929619    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:33:19.493246    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:33:24.615390    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 unpause: exit status 80 (1m0.205091745s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-580000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format=<no value>: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:184: "out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (173.30s)

                                                
                                    
TestErrorSpam/stop (155.81s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 stop
E0926 17:33:34.858990    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 stop: (5.396228762s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 stop
E0926 17:33:55.340985    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:34:36.304144    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 stop: (1m15.208501044s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 stop
E0926 17:35:58.227380    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-580000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-580000 stop: (1m15.205783682s)
--- PASS: TestErrorSpam/stop (155.81s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19711-1128/.minikube/files/etc/test/nested/copy/1679/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (167.96s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-748000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0926 17:38:14.349350    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:38:42.070023    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-amd64 start -p functional-748000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (2m47.954972807s)
--- PASS: TestFunctional/serial/StartWithProxy (167.96s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (38.94s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0926 17:38:58.488949    1679 config.go:182] Loaded profile config "functional-748000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-748000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-amd64 start -p functional-748000 --alsologtostderr -v=8: (38.942429914s)
functional_test.go:663: soft start took 38.942966888s for "functional-748000" cluster.
I0926 17:39:37.431708    1679 config.go:182] Loaded profile config "functional-748000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (38.94s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-748000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.98s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-748000 cache add registry.k8s.io/pause:3.1: (1.11160696s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.98s)

TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-748000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1284190819/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 cache add minikube-local-cache-test:functional-748000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 cache delete minikube-local-cache-test:functional-748000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-748000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-748000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (142.615928ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.03s)
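The cache_reload sequence above is a self-contained round trip: remove the image inside the node, confirm crictl no longer sees it, reload the cache, and confirm it is back. A minimal Go sketch of that flow, using os/exec and the binary path and profile name from the log (the run helper and error handling are illustrative, not the test's own code):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run invokes the minikube binary from the log with the given arguments and
// returns nil only on exit status 0.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	p := "functional-748000"
	img := "registry.k8s.io/pause:latest"

	// 1. Remove the image inside the node.
	if err := run("-p", p, "ssh", "sudo docker rmi "+img); err != nil {
		log.Fatal(err)
	}
	// 2. Inspecting it must now fail, as the non-zero exit above shows.
	if err := run("-p", p, "ssh", "sudo crictl inspecti "+img); err == nil {
		log.Fatal("expected inspecti to fail after rmi")
	}
	// 3. Reload pushes cached images back into the node.
	if err := run("-p", p, "cache", "reload"); err != nil {
		log.Fatal(err)
	}
	// 4. The image is visible again.
	if err := run("-p", p, "ssh", "sudo crictl inspecti "+img); err != nil {
		log.Fatal(err)
	}
}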

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (1.21s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 kubectl -- --context functional-748000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-amd64 -p functional-748000 kubectl -- --context functional-748000 get pods: (1.211957038s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.21s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.6s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-748000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-748000 get pods: (1.59696666s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.60s)

TestFunctional/serial/ExtraConfig (37.79s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-748000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-amd64 start -p functional-748000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.784740987s)
functional_test.go:761: restart took 37.784912765s for "functional-748000" cluster.
I0926 17:40:23.997671    1679 config.go:182] Loaded profile config "functional-748000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (37.79s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-748000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)
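The health check above reduces to decoding kubectl's JSON and reading each control-plane pod's phase and Ready condition. A rough sketch of that decoding step, with the struct trimmed to just the fields the check needs (the type shapes are illustrative, not the test's own):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList mirrors only the parts of `kubectl get po -o=json` used here;
// encoding/json ignores everything else.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-748000",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = true
			}
		}
		fmt.Printf("%s phase: %s, ready: %v\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}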

TestFunctional/serial/LogsCmd (2.59s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 logs
functional_test.go:1236: (dbg) Done: out/minikube-darwin-amd64 -p functional-748000 logs: (2.593997934s)
--- PASS: TestFunctional/serial/LogsCmd (2.59s)

TestFunctional/serial/LogsFileCmd (2.79s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd2855366454/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-darwin-amd64 -p functional-748000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd2855366454/001/logs.txt: (2.785177216s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.79s)

TestFunctional/serial/InvalidService (4s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-748000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-748000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-748000: exit status 115 (264.238145ms)
-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:30589 |
	|-----------|-------------|-------------|--------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-748000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.00s)
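Asserting on a specific failure code, as this test does with exit status 115, is straightforward to reproduce outside the harness. A small sketch assuming the binary path and profile from the log (115 is simply the code observed above for SVC_UNREACHABLE):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Querying a service whose pods are not running is expected to fail.
	cmd := exec.Command("out/minikube-darwin-amd64",
		"service", "invalid-svc", "-p", "functional-748000")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		if exitErr.ExitCode() == 115 {
			fmt.Println("got the expected SVC_UNREACHABLE exit status:", 115)
			return
		}
		log.Fatalf("unexpected exit status: %d", exitErr.ExitCode())
	}
	log.Fatal("expected the command to fail with a non-zero exit status")
}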

TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-748000 config get cpus: exit status 14 (74.460903ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-748000 config get cpus: exit status 14 (57.104327ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
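The round trip above leans on one convention: config get exits with code 14 when the key is unset. A compact sketch of the same unset/get/set cycle; the cfg helper is hypothetical and the binary path is the one from the log:

package main

import (
	"fmt"
	"os/exec"
)

// cfg runs `minikube config <args>` for the profile and returns stdout plus
// the exit code (0 on success; the log shows 14 for a missing key).
func cfg(args ...string) (string, int) {
	full := append([]string{"-p", "functional-748000", "config"}, args...)
	out, err := exec.Command("out/minikube-darwin-amd64", full...).Output()
	if exitErr, ok := err.(*exec.ExitError); ok {
		return string(out), exitErr.ExitCode()
	}
	return string(out), 0
}

func main() {
	cfg("unset", "cpus")
	_, code := cfg("get", "cpus")
	fmt.Println("get after unset, exit code:", code) // 14 in the log above

	cfg("set", "cpus", "2")
	val, _ := cfg("get", "cpus")
	fmt.Println("get after set:", val) // prints 2

	cfg("unset", "cpus")
	_, code = cfg("get", "cpus")
	fmt.Println("get after final unset, exit code:", code) // 14 again
}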

TestFunctional/parallel/DashboardCmd (15.58s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-748000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-748000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3439: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.58s)

TestFunctional/parallel/DryRun (1s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-748000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-748000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (542.68684ms)
-- stdout --
	* [functional-748000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0926 17:41:36.257683    3394 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:41:36.258340    3394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:41:36.258348    3394 out.go:358] Setting ErrFile to fd 2...
	I0926 17:41:36.258354    3394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:41:36.258936    3394 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 17:41:36.260497    3394 out.go:352] Setting JSON to false
	I0926 17:41:36.283085    3394 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2466,"bootTime":1727395230,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0926 17:41:36.283222    3394 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:41:36.304627    3394 out.go:177] * [functional-748000] minikube v1.34.0 on Darwin 14.6.1
	I0926 17:41:36.347128    3394 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:41:36.347193    3394 notify.go:220] Checking for updates...
	I0926 17:41:36.389143    3394 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:41:36.410262    3394 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0926 17:41:36.431262    3394 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:41:36.452379    3394 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 17:41:36.472971    3394 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:41:36.495051    3394 config.go:182] Loaded profile config "functional-748000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:41:36.495654    3394 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:41:36.495705    3394 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:41:36.505148    3394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50916
	I0926 17:41:36.505536    3394 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:41:36.505936    3394 main.go:141] libmachine: Using API Version  1
	I0926 17:41:36.505962    3394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:41:36.506229    3394 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:41:36.506362    3394 main.go:141] libmachine: (functional-748000) Calling .DriverName
	I0926 17:41:36.506565    3394 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:41:36.506837    3394 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:41:36.506867    3394 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:41:36.515480    3394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50918
	I0926 17:41:36.515866    3394 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:41:36.516211    3394 main.go:141] libmachine: Using API Version  1
	I0926 17:41:36.516226    3394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:41:36.516448    3394 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:41:36.516553    3394 main.go:141] libmachine: (functional-748000) Calling .DriverName
	I0926 17:41:36.561194    3394 out.go:177] * Using the hyperkit driver based on existing profile
	I0926 17:41:36.603165    3394 start.go:297] selected driver: hyperkit
	I0926 17:41:36.603187    3394 start.go:901] validating driver "hyperkit" against &{Name:functional-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:41:36.603349    3394 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:41:36.627352    3394 out.go:201] 
	W0926 17:41:36.648066    3394 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0926 17:41:36.685271    3394 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-748000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.00s)
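The dry run fails validation because the requested 250MiB is below the minimum the error message reports as 1800MB. A toy sketch of just that comparison and exit status (the threshold is copied from the message above, not read out of minikube's source):

package main

import (
	"fmt"
	"os"
)

// minUsableMemoryMB is the floor quoted by the log's error message.
const minUsableMemoryMB = 1800

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23) // the exit status observed in the log
	}
}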

TestFunctional/parallel/InternationalLanguage (0.49s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-748000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-748000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (486.658043ms)
-- stdout --
	* [functional-748000] minikube v1.34.0 sur Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0926 17:41:37.246484    3410 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:41:37.246632    3410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:41:37.246637    3410 out.go:358] Setting ErrFile to fd 2...
	I0926 17:41:37.246640    3410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:41:37.246819    3410 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 17:41:37.248502    3410 out.go:352] Setting JSON to false
	I0926 17:41:37.271501    3410 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2467,"bootTime":1727395230,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0926 17:41:37.271661    3410 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:41:37.293246    3410 out.go:177] * [functional-748000] minikube v1.34.0 sur Darwin 14.6.1
	I0926 17:41:37.335337    3410 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:41:37.335383    3410 notify.go:220] Checking for updates...
	I0926 17:41:37.377114    3410 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	I0926 17:41:37.398394    3410 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0926 17:41:37.419131    3410 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:41:37.440409    3410 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	I0926 17:41:37.461329    3410 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:41:37.482923    3410 config.go:182] Loaded profile config "functional-748000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:41:37.483678    3410 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:41:37.483752    3410 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:41:37.493351    3410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50926
	I0926 17:41:37.493742    3410 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:41:37.494145    3410 main.go:141] libmachine: Using API Version  1
	I0926 17:41:37.494156    3410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:41:37.494390    3410 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:41:37.494539    3410 main.go:141] libmachine: (functional-748000) Calling .DriverName
	I0926 17:41:37.494733    3410 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:41:37.494989    3410 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:41:37.495022    3410 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:41:37.503498    3410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50928
	I0926 17:41:37.503849    3410 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:41:37.504207    3410 main.go:141] libmachine: Using API Version  1
	I0926 17:41:37.504230    3410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:41:37.504438    3410 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:41:37.504542    3410 main.go:141] libmachine: (functional-748000) Calling .DriverName
	I0926 17:41:37.533143    3410 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0926 17:41:37.575184    3410 start.go:297] selected driver: hyperkit
	I0926 17:41:37.575213    3410 start.go:901] validating driver "hyperkit" against &{Name:functional-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:41:37.575430    3410 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:41:37.600277    3410 out.go:201] 
	W0926 17:41:37.621200    3410 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0926 17:41:37.641864    3410 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.49s)
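The log does not show how the French locale was selected for this run; minikube picks its output language from the process locale, so one plausible reproduction (an assumption, not the test's actual mechanism) is to set a French locale in the child environment:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	// LC_ALL=fr is a guess at how to trigger the localized
	// "Utilisation du pilote hyperkit..." output seen above.
	cmd := exec.Command("out/minikube-darwin-amd64",
		"start", "-p", "functional-748000", "--dry-run", "--memory", "250MB")
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		log.Println("expected failure (memory below the minimum):", err)
	}
}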

TestFunctional/parallel/StatusCmd (0.53s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.53s)

TestFunctional/parallel/ServiceCmdConnect (7.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-748000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-748000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-msz6p" [33257118-7a7b-4e89-82bc-d5e5b9338283] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-msz6p" [33257118-7a7b-4e89-82bc-d5e5b9338283] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004151629s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.169.0.4:32568
functional_test.go:1675: http://192.169.0.4:32568: success! body:
Hostname: hello-node-connect-67bdd5bbb4-msz6p
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/
Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:32568
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.57s)
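Once service ... --url has printed an endpoint, the success check is an ordinary HTTP GET with a short retry while the pod warms up. A sketch using the endpoint printed in this log (a fresh run would print a different NodePort):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	url := "http://192.169.0.4:32568" // endpoint from the log above
	var lastErr error
	for i := 0; i < 10; i++ {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s: success! body:\n%s\n", url, body)
			return
		}
		lastErr = err
		time.Sleep(time.Second) // give the echoserver pod time to start
	}
	log.Fatal(lastErr)
}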

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (32.16s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [fcb77a28-5cc7-4727-bd96-f8f71cdf007c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005785346s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-748000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-748000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-748000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-748000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [146725d5-c543-4708-a480-7d1a42279d58] Pending
helpers_test.go:344: "sp-pod" [146725d5-c543-4708-a480-7d1a42279d58] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [146725d5-c543-4708-a480-7d1a42279d58] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003190637s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-748000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-748000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-748000 delete -f testdata/storage-provisioner/pod.yaml: (1.585622511s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-748000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [15b0a80e-4c6f-4118-a9f2-96e9befebed9] Pending
helpers_test.go:344: "sp-pod" [15b0a80e-4c6f-4118-a9f2-96e9befebed9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [15b0a80e-4c6f-4118-a9f2-96e9befebed9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003386844s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-748000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (32.16s)
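The heart of this test is the persistence assertion: a file written through the claim in the first sp-pod must survive into a second pod mounting the same PVC. A bare-bones replay of those kubectl steps (the helper is illustrative, and the real test also waits for the new pod to be Running before the final check):

package main

import (
	"log"
	"os/exec"
)

// kubectl runs a command against the profile's context and aborts on error.
func kubectl(args ...string) {
	full := append([]string{"--context", "functional-748000"}, args...)
	if out, err := exec.Command("kubectl", full...).CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (wait for the recreated sp-pod to be Running, as the test does, then:)
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // should list "foo"
}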

TestFunctional/parallel/SSHCmd (0.29s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.29s)

TestFunctional/parallel/CpCmd (1.07s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh -n functional-748000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 cp functional-748000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd154935809/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh -n functional-748000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh -n functional-748000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.07s)

TestFunctional/parallel/MySQL (29.4s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-748000 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-vcgpg" [b5686721-0298-429d-aea3-a06bfccfffc6] Pending
helpers_test.go:344: "mysql-6cdb49bbb-vcgpg" [b5686721-0298-429d-aea3-a06bfccfffc6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-vcgpg" [b5686721-0298-429d-aea3-a06bfccfffc6] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.003956024s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-748000 exec mysql-6cdb49bbb-vcgpg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-748000 exec mysql-6cdb49bbb-vcgpg -- mysql -ppassword -e "show databases;": exit status 1 (154.786253ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0926 17:40:59.938147    1679 retry.go:31] will retry after 814.055824ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-748000 exec mysql-6cdb49bbb-vcgpg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-748000 exec mysql-6cdb49bbb-vcgpg -- mysql -ppassword -e "show databases;": exit status 1 (139.64163ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0926 17:41:00.892912    1679 retry.go:31] will retry after 1.623563969s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-748000 exec mysql-6cdb49bbb-vcgpg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-748000 exec mysql-6cdb49bbb-vcgpg -- mysql -ppassword -e "show databases;": exit status 1 (103.346612ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0926 17:41:02.622153    1679 retry.go:31] will retry after 3.268543832s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-748000 exec mysql-6cdb49bbb-vcgpg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.40s)
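The "will retry after ..." lines come from the harness retrying while mysqld finishes initializing. A minimal version of that retry-with-backoff pattern (the delays in the log are randomized; this sketch just doubles a fixed starting delay):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delay := 800 * time.Millisecond
	for attempt := 1; ; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-748000",
			"exec", "mysql-6cdb49bbb-vcgpg", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		if attempt == 5 {
			fmt.Printf("giving up after %d attempts: %v\n%s", attempt, err, out)
			return
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
}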

TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1679/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "sudo cat /etc/test/nested/copy/1679/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/CertSync (1.03s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1679.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "sudo cat /etc/ssl/certs/1679.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1679.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "sudo cat /usr/share/ca-certificates/1679.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/16792.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "sudo cat /etc/ssl/certs/16792.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/16792.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "sudo cat /usr/share/ca-certificates/16792.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.03s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-748000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)
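The --template argument here is a standard Go template, and the same range-over-map construct runs verbatim under text/template, as this small sketch with made-up label data shows:

package main

import (
	"log"
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{ // sample data; real labels come from the node
		"kubernetes.io/hostname": "functional-748000",
		"kubernetes.io/os":       "linux",
	}
	// Same template body as the kubectl call, minus the index into .items.
	tmpl := template.Must(template.New("labels").Parse(
		"{{range $k, $v := .}}{{$k}} {{end}}"))
	if err := tmpl.Execute(os.Stdout, labels); err != nil {
		log.Fatal(err)
	}
}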

TestFunctional/parallel/NonActiveRuntimeDisabled (0.17s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-748000 ssh "sudo systemctl is-active crio": exit status 1 (172.808947ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.17s)
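systemctl is-active reports state through its exit code, zero only when the unit is active: here the remote command exits 3 with "inactive" on stdout, and the local minikube invocation surfaces a non-zero status of its own (1 in this log). A sketch of reading both the text and the code, reusing the binary path and profile from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64",
		"-p", "functional-748000", "ssh", "sudo systemctl is-active crio").CombinedOutput()
	fmt.Printf("%s", out) // "inactive" when crio is not the active runtime
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("local exit status:", exitErr.ExitCode())
	}
}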

TestFunctional/parallel/License (0.61s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.61s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.49s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-748000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-748000
docker.io/kicbase/echo-server:functional-748000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-748000 image ls --format short --alsologtostderr:
I0926 17:41:39.569046    3444 out.go:345] Setting OutFile to fd 1 ...
I0926 17:41:39.569339    3444 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:41:39.569345    3444 out.go:358] Setting ErrFile to fd 2...
I0926 17:41:39.569348    3444 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:41:39.569555    3444 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
I0926 17:41:39.570320    3444 config.go:182] Loaded profile config "functional-748000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:41:39.570419    3444 config.go:182] Loaded profile config "functional-748000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:41:39.570792    3444 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0926 17:41:39.570837    3444 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0926 17:41:39.579521    3444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50982
I0926 17:41:39.579983    3444 main.go:141] libmachine: () Calling .GetVersion
I0926 17:41:39.580413    3444 main.go:141] libmachine: Using API Version  1
I0926 17:41:39.580422    3444 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 17:41:39.580628    3444 main.go:141] libmachine: () Calling .GetMachineName
I0926 17:41:39.580735    3444 main.go:141] libmachine: (functional-748000) Calling .GetState
I0926 17:41:39.580823    3444 main.go:141] libmachine: (functional-748000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0926 17:41:39.580904    3444 main.go:141] libmachine: (functional-748000) DBG | hyperkit pid from json: 2691
I0926 17:41:39.582249    3444 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0926 17:41:39.582280    3444 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0926 17:41:39.591012    3444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50984
I0926 17:41:39.591400    3444 main.go:141] libmachine: () Calling .GetVersion
I0926 17:41:39.591742    3444 main.go:141] libmachine: Using API Version  1
I0926 17:41:39.591755    3444 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 17:41:39.591987    3444 main.go:141] libmachine: () Calling .GetMachineName
I0926 17:41:39.592093    3444 main.go:141] libmachine: (functional-748000) Calling .DriverName
I0926 17:41:39.592278    3444 ssh_runner.go:195] Run: systemctl --version
I0926 17:41:39.592296    3444 main.go:141] libmachine: (functional-748000) Calling .GetSSHHostname
I0926 17:41:39.592376    3444 main.go:141] libmachine: (functional-748000) Calling .GetSSHPort
I0926 17:41:39.592455    3444 main.go:141] libmachine: (functional-748000) Calling .GetSSHKeyPath
I0926 17:41:39.592542    3444 main.go:141] libmachine: (functional-748000) Calling .GetSSHUsername
I0926 17:41:39.592626    3444 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/functional-748000/id_rsa Username:docker}
I0926 17:41:39.624084    3444 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0926 17:41:39.639729    3444 main.go:141] libmachine: Making call to close driver server
I0926 17:41:39.639738    3444 main.go:141] libmachine: (functional-748000) Calling .Close
I0926 17:41:39.639893    3444 main.go:141] libmachine: Successfully made call to close driver server
I0926 17:41:39.639905    3444 main.go:141] libmachine: Making call to close connection to plugin binary
I0926 17:41:39.639912    3444 main.go:141] libmachine: Making call to close driver server
I0926 17:41:39.639914    3444 main.go:141] libmachine: (functional-748000) DBG | Closing plugin on server side
I0926 17:41:39.639920    3444 main.go:141] libmachine: (functional-748000) Calling .Close
I0926 17:41:39.640101    3444 main.go:141] libmachine: Successfully made call to close driver server
I0926 17:41:39.640112    3444 main.go:141] libmachine: Making call to close connection to plugin binary
I0926 17:41:39.640129    3444 main.go:141] libmachine: (functional-748000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.15s)
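The short format keeps only the repo:tag strings from the daemon's image list. A minimal way to reproduce it by hand, assuming the functional-748000 profile is still running, is to issue the same Docker query over SSH and keep just those fields:

	out/minikube-darwin-amd64 -p functional-748000 ssh -- docker images --format '{{.Repository}}:{{.Tag}}'

This mirrors the docker images --no-trunc --format "{{json .}}" call visible in the stderr trace above.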

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-748000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-748000 | cafc4905aea09 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| docker.io/kicbase/echo-server               | functional-748000 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| localhost/my-image                          | functional-748000 | bb69afa77379d | 1.24MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-748000 image ls --format table --alsologtostderr:
I0926 17:41:42.512529    3469 out.go:345] Setting OutFile to fd 1 ...
I0926 17:41:42.512726    3469 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:41:42.512732    3469 out.go:358] Setting ErrFile to fd 2...
I0926 17:41:42.512736    3469 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:41:42.512915    3469 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
I0926 17:41:42.513541    3469 config.go:182] Loaded profile config "functional-748000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:41:42.513637    3469 config.go:182] Loaded profile config "functional-748000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:41:42.514004    3469 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0926 17:41:42.514046    3469 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0926 17:41:42.522385    3469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51018
I0926 17:41:42.522802    3469 main.go:141] libmachine: () Calling .GetVersion
I0926 17:41:42.523207    3469 main.go:141] libmachine: Using API Version  1
I0926 17:41:42.523243    3469 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 17:41:42.523450    3469 main.go:141] libmachine: () Calling .GetMachineName
I0926 17:41:42.523560    3469 main.go:141] libmachine: (functional-748000) Calling .GetState
I0926 17:41:42.523650    3469 main.go:141] libmachine: (functional-748000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0926 17:41:42.523718    3469 main.go:141] libmachine: (functional-748000) DBG | hyperkit pid from json: 2691
I0926 17:41:42.525004    3469 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0926 17:41:42.525026    3469 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0926 17:41:42.533730    3469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51020
I0926 17:41:42.534120    3469 main.go:141] libmachine: () Calling .GetVersion
I0926 17:41:42.534468    3469 main.go:141] libmachine: Using API Version  1
I0926 17:41:42.534481    3469 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 17:41:42.534695    3469 main.go:141] libmachine: () Calling .GetMachineName
I0926 17:41:42.534805    3469 main.go:141] libmachine: (functional-748000) Calling .DriverName
I0926 17:41:42.534965    3469 ssh_runner.go:195] Run: systemctl --version
I0926 17:41:42.534984    3469 main.go:141] libmachine: (functional-748000) Calling .GetSSHHostname
I0926 17:41:42.535075    3469 main.go:141] libmachine: (functional-748000) Calling .GetSSHPort
I0926 17:41:42.535145    3469 main.go:141] libmachine: (functional-748000) Calling .GetSSHKeyPath
I0926 17:41:42.535225    3469 main.go:141] libmachine: (functional-748000) Calling .GetSSHUsername
I0926 17:41:42.535315    3469 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/functional-748000/id_rsa Username:docker}
I0926 17:41:42.573067    3469 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0926 17:41:42.592431    3469 main.go:141] libmachine: Making call to close driver server
I0926 17:41:42.592441    3469 main.go:141] libmachine: (functional-748000) Calling .Close
I0926 17:41:42.592611    3469 main.go:141] libmachine: Successfully made call to close driver server
I0926 17:41:42.592611    3469 main.go:141] libmachine: (functional-748000) DBG | Closing plugin on server side
I0926 17:41:42.592619    3469 main.go:141] libmachine: Making call to close connection to plugin binary
I0926 17:41:42.592626    3469 main.go:141] libmachine: Making call to close driver server
I0926 17:41:42.592630    3469 main.go:141] libmachine: (functional-748000) Calling .Close
I0926 17:41:42.592785    3469 main.go:141] libmachine: Successfully made call to close driver server
I0926 17:41:42.592789    3469 main.go:141] libmachine: (functional-748000) DBG | Closing plugin on server side
I0926 17:41:42.592796    3469 main.go:141] libmachine: Making call to close connection to plugin binary
2024/09/26 17:41:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-748000 image ls --format json --alsologtostderr:
[{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c0
20289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"bb69afa77379da7ebffbef2d0b2ee05b9884479b46aab24ace777fdc2d36f368","repoDigests":[],"repoTags":["localhost/my-image:functional-748000"],"size":"1240000"},{"id":"cafc4905aea09f6d055a633713b4bf55c154387ffee1c19e98fee7e205a6c6f6","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-748000"],"size":"30"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-ser
ver:functional-748000"],"size":"4940000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"6e38f40d62
8db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-748000 image ls --format json --alsologtostderr:
I0926 17:41:42.354932    3465 out.go:345] Setting OutFile to fd 1 ...
I0926 17:41:42.355133    3465 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:41:42.355138    3465 out.go:358] Setting ErrFile to fd 2...
I0926 17:41:42.355142    3465 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:41:42.355319    3465 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
I0926 17:41:42.356029    3465 config.go:182] Loaded profile config "functional-748000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:41:42.356133    3465 config.go:182] Loaded profile config "functional-748000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:41:42.356519    3465 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0926 17:41:42.356562    3465 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0926 17:41:42.365202    3465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51013
I0926 17:41:42.365630    3465 main.go:141] libmachine: () Calling .GetVersion
I0926 17:41:42.366059    3465 main.go:141] libmachine: Using API Version  1
I0926 17:41:42.366069    3465 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 17:41:42.366274    3465 main.go:141] libmachine: () Calling .GetMachineName
I0926 17:41:42.366382    3465 main.go:141] libmachine: (functional-748000) Calling .GetState
I0926 17:41:42.366474    3465 main.go:141] libmachine: (functional-748000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0926 17:41:42.366541    3465 main.go:141] libmachine: (functional-748000) DBG | hyperkit pid from json: 2691
I0926 17:41:42.367842    3465 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0926 17:41:42.367863    3465 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0926 17:41:42.376054    3465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51015
I0926 17:41:42.376403    3465 main.go:141] libmachine: () Calling .GetVersion
I0926 17:41:42.376772    3465 main.go:141] libmachine: Using API Version  1
I0926 17:41:42.376798    3465 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 17:41:42.377064    3465 main.go:141] libmachine: () Calling .GetMachineName
I0926 17:41:42.377201    3465 main.go:141] libmachine: (functional-748000) Calling .DriverName
I0926 17:41:42.377373    3465 ssh_runner.go:195] Run: systemctl --version
I0926 17:41:42.377391    3465 main.go:141] libmachine: (functional-748000) Calling .GetSSHHostname
I0926 17:41:42.377489    3465 main.go:141] libmachine: (functional-748000) Calling .GetSSHPort
I0926 17:41:42.377581    3465 main.go:141] libmachine: (functional-748000) Calling .GetSSHKeyPath
I0926 17:41:42.377671    3465 main.go:141] libmachine: (functional-748000) Calling .GetSSHUsername
I0926 17:41:42.377764    3465 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/functional-748000/id_rsa Username:docker}
I0926 17:41:42.409587    3465 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0926 17:41:42.429590    3465 main.go:141] libmachine: Making call to close driver server
I0926 17:41:42.429600    3465 main.go:141] libmachine: (functional-748000) Calling .Close
I0926 17:41:42.429742    3465 main.go:141] libmachine: Successfully made call to close driver server
I0926 17:41:42.429752    3465 main.go:141] libmachine: Making call to close connection to plugin binary
I0926 17:41:42.429763    3465 main.go:141] libmachine: Making call to close driver server
I0926 17:41:42.429770    3465 main.go:141] libmachine: (functional-748000) Calling .Close
I0926 17:41:42.429922    3465 main.go:141] libmachine: (functional-748000) DBG | Closing plugin on server side
I0926 17:41:42.429923    3465 main.go:141] libmachine: Successfully made call to close driver server
I0926 17:41:42.429934    3465 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)
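The JSON format is the script-friendly variant; each element carries the id, repoDigests, repoTags, and size fields shown in the stdout dump above. A minimal query sketch, assuming jq is available on the host:

	out/minikube-darwin-amd64 -p functional-748000 image ls --format json | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'

This prints one repo:tag per line with its reported size in bytes.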

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-748000 image ls --format yaml --alsologtostderr:
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-748000
size: "4940000"
- id: cafc4905aea09f6d055a633713b4bf55c154387ffee1c19e98fee7e205a6c6f6
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-748000
size: "30"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-748000 image ls --format yaml --alsologtostderr:
I0926 17:41:39.722149    3448 out.go:345] Setting OutFile to fd 1 ...
I0926 17:41:39.722418    3448 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:41:39.722424    3448 out.go:358] Setting ErrFile to fd 2...
I0926 17:41:39.722428    3448 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:41:39.722616    3448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
I0926 17:41:39.723280    3448 config.go:182] Loaded profile config "functional-748000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:41:39.723380    3448 config.go:182] Loaded profile config "functional-748000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:41:39.723790    3448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0926 17:41:39.723843    3448 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0926 17:41:39.732108    3448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50988
I0926 17:41:39.732528    3448 main.go:141] libmachine: () Calling .GetVersion
I0926 17:41:39.732952    3448 main.go:141] libmachine: Using API Version  1
I0926 17:41:39.732983    3448 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 17:41:39.733235    3448 main.go:141] libmachine: () Calling .GetMachineName
I0926 17:41:39.733357    3448 main.go:141] libmachine: (functional-748000) Calling .GetState
I0926 17:41:39.733446    3448 main.go:141] libmachine: (functional-748000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0926 17:41:39.733507    3448 main.go:141] libmachine: (functional-748000) DBG | hyperkit pid from json: 2691
I0926 17:41:39.734856    3448 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0926 17:41:39.734882    3448 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0926 17:41:39.743275    3448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50990
I0926 17:41:39.743615    3448 main.go:141] libmachine: () Calling .GetVersion
I0926 17:41:39.743994    3448 main.go:141] libmachine: Using API Version  1
I0926 17:41:39.744019    3448 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 17:41:39.744223    3448 main.go:141] libmachine: () Calling .GetMachineName
I0926 17:41:39.744336    3448 main.go:141] libmachine: (functional-748000) Calling .DriverName
I0926 17:41:39.744504    3448 ssh_runner.go:195] Run: systemctl --version
I0926 17:41:39.744525    3448 main.go:141] libmachine: (functional-748000) Calling .GetSSHHostname
I0926 17:41:39.744606    3448 main.go:141] libmachine: (functional-748000) Calling .GetSSHPort
I0926 17:41:39.744682    3448 main.go:141] libmachine: (functional-748000) Calling .GetSSHKeyPath
I0926 17:41:39.744768    3448 main.go:141] libmachine: (functional-748000) Calling .GetSSHUsername
I0926 17:41:39.744844    3448 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/functional-748000/id_rsa Username:docker}
I0926 17:41:39.775785    3448 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0926 17:41:39.792417    3448 main.go:141] libmachine: Making call to close driver server
I0926 17:41:39.792426    3448 main.go:141] libmachine: (functional-748000) Calling .Close
I0926 17:41:39.792587    3448 main.go:141] libmachine: (functional-748000) DBG | Closing plugin on server side
I0926 17:41:39.792606    3448 main.go:141] libmachine: Successfully made call to close driver server
I0926 17:41:39.792616    3448 main.go:141] libmachine: Making call to close connection to plugin binary
I0926 17:41:39.792623    3448 main.go:141] libmachine: Making call to close driver server
I0926 17:41:39.792631    3448 main.go:141] libmachine: (functional-748000) Calling .Close
I0926 17:41:39.792768    3448 main.go:141] libmachine: Successfully made call to close driver server
I0926 17:41:39.792776    3448 main.go:141] libmachine: Making call to close connection to plugin binary
I0926 17:41:39.792796    3448 main.go:141] libmachine: (functional-748000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-748000 ssh pgrep buildkitd: exit status 1 (126.858267ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 image build -t localhost/my-image:functional-748000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-amd64 -p functional-748000 image build -t localhost/my-image:functional-748000 testdata/build --alsologtostderr: (2.198146302s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-748000 image build -t localhost/my-image:functional-748000 testdata/build --alsologtostderr:
I0926 17:41:39.999603    3457 out.go:345] Setting OutFile to fd 1 ...
I0926 17:41:40.000435    3457 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:41:40.000444    3457 out.go:358] Setting ErrFile to fd 2...
I0926 17:41:40.000450    3457 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:41:40.001023    3457 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
I0926 17:41:40.001661    3457 config.go:182] Loaded profile config "functional-748000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:41:40.002785    3457 config.go:182] Loaded profile config "functional-748000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:41:40.003134    3457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0926 17:41:40.003173    3457 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0926 17:41:40.011499    3457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51000
I0926 17:41:40.011928    3457 main.go:141] libmachine: () Calling .GetVersion
I0926 17:41:40.012334    3457 main.go:141] libmachine: Using API Version  1
I0926 17:41:40.012352    3457 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 17:41:40.012562    3457 main.go:141] libmachine: () Calling .GetMachineName
I0926 17:41:40.012674    3457 main.go:141] libmachine: (functional-748000) Calling .GetState
I0926 17:41:40.012764    3457 main.go:141] libmachine: (functional-748000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0926 17:41:40.012830    3457 main.go:141] libmachine: (functional-748000) DBG | hyperkit pid from json: 2691
I0926 17:41:40.014144    3457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0926 17:41:40.014169    3457 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0926 17:41:40.022406    3457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51002
I0926 17:41:40.022742    3457 main.go:141] libmachine: () Calling .GetVersion
I0926 17:41:40.023081    3457 main.go:141] libmachine: Using API Version  1
I0926 17:41:40.023095    3457 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 17:41:40.023316    3457 main.go:141] libmachine: () Calling .GetMachineName
I0926 17:41:40.023424    3457 main.go:141] libmachine: (functional-748000) Calling .DriverName
I0926 17:41:40.023580    3457 ssh_runner.go:195] Run: systemctl --version
I0926 17:41:40.023596    3457 main.go:141] libmachine: (functional-748000) Calling .GetSSHHostname
I0926 17:41:40.023672    3457 main.go:141] libmachine: (functional-748000) Calling .GetSSHPort
I0926 17:41:40.023749    3457 main.go:141] libmachine: (functional-748000) Calling .GetSSHKeyPath
I0926 17:41:40.023830    3457 main.go:141] libmachine: (functional-748000) Calling .GetSSHUsername
I0926 17:41:40.023917    3457 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/functional-748000/id_rsa Username:docker}
I0926 17:41:40.054411    3457 build_images.go:161] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3481219507.tar
I0926 17:41:40.054494    3457 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0926 17:41:40.063758    3457 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3481219507.tar
I0926 17:41:40.066974    3457 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3481219507.tar: stat -c "%s %y" /var/lib/minikube/build/build.3481219507.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3481219507.tar': No such file or directory
I0926 17:41:40.067007    3457 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3481219507.tar --> /var/lib/minikube/build/build.3481219507.tar (3072 bytes)
I0926 17:41:40.086872    3457 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3481219507
I0926 17:41:40.094915    3457 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3481219507 -xf /var/lib/minikube/build/build.3481219507.tar
I0926 17:41:40.102881    3457 docker.go:360] Building image: /var/lib/minikube/build/build.3481219507
I0926 17:41:40.102976    3457 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-748000 /var/lib/minikube/build/build.3481219507
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:bb69afa77379da7ebffbef2d0b2ee05b9884479b46aab24ace777fdc2d36f368 done
#8 naming to localhost/my-image:functional-748000 done
#8 DONE 0.0s
I0926 17:41:42.096110    3457 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-748000 /var/lib/minikube/build/build.3481219507: (1.993121096s)
I0926 17:41:42.096177    3457 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3481219507
I0926 17:41:42.105008    3457 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3481219507.tar
I0926 17:41:42.113632    3457 build_images.go:217] Built localhost/my-image:functional-748000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3481219507.tar
I0926 17:41:42.113657    3457 build_images.go:133] succeeded building to: functional-748000
I0926 17:41:42.113660    3457 build_images.go:134] failed building to: 
I0926 17:41:42.113673    3457 main.go:141] libmachine: Making call to close driver server
I0926 17:41:42.113679    3457 main.go:141] libmachine: (functional-748000) Calling .Close
I0926 17:41:42.113827    3457 main.go:141] libmachine: Successfully made call to close driver server
I0926 17:41:42.113837    3457 main.go:141] libmachine: Making call to close connection to plugin binary
I0926 17:41:42.113843    3457 main.go:141] libmachine: Making call to close driver server
I0926 17:41:42.113849    3457 main.go:141] libmachine: (functional-748000) Calling .Close
I0926 17:41:42.113879    3457 main.go:141] libmachine: (functional-748000) DBG | Closing plugin on server side
I0926 17:41:42.113982    3457 main.go:141] libmachine: Successfully made call to close driver server
I0926 17:41:42.113994    3457 main.go:141] libmachine: Making call to close connection to plugin binary
I0926 17:41:42.114003    3457 main.go:141] libmachine: (functional-748000) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.48s)
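The three numbered build steps in the trace ([1/3] FROM gcr.io/k8s-minikube/busybox, [2/3] RUN true, [3/3] ADD content.txt /) imply a Dockerfile of roughly the following shape; this is a reconstruction from the log, not the verbatim contents of testdata/build:

	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /

The resulting image appears as localhost/my-image:functional-748000 in the table and JSON listings above.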

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.869048989s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-748000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.90s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-748000 docker-env) && out/minikube-darwin-amd64 status -p functional-748000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-748000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.62s)
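docker-env prints shell export statements (DOCKER_TLS_VERIFY, DOCKER_HOST, DOCKER_CERT_PATH and, in current minikube releases, MINIKUBE_ACTIVE_DOCKERD) that point the host's docker CLI at the daemon inside the VM, which is why the eval'd docker images call above lists the cluster's images. A minimal usage sketch:

	eval $(out/minikube-darwin-amd64 -p functional-748000 docker-env)
	docker images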

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 image load --daemon kicbase/echo-server:functional-748000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 image load --daemon kicbase/echo-server:functional-748000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-748000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 image load --daemon kicbase/echo-server:functional-748000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 image save kicbase/echo-server:functional-748000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 image rm kicbase/echo-server:functional-748000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-748000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 image save --daemon kicbase/echo-server:functional-748000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-748000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.32s)
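Taken together, the last four tests exercise a full save/load round trip. Condensed into a manual sketch, with the tag and tar path taken verbatim from the log:

	out/minikube-darwin-amd64 -p functional-748000 image save kicbase/echo-server:functional-748000 /Users/jenkins/workspace/echo-server-save.tar
	out/minikube-darwin-amd64 -p functional-748000 image rm kicbase/echo-server:functional-748000
	out/minikube-darwin-amd64 -p functional-748000 image load /Users/jenkins/workspace/echo-server-save.tar
	out/minikube-darwin-amd64 -p functional-748000 image save --daemon kicbase/echo-server:functional-748000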

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (23.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-748000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-748000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-9znkp" [46a0c9ca-8ead-4054-951b-b8261e28b3d6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-9znkp" [46a0c9ca-8ead-4054-951b-b8261e28b3d6] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 23.004670921s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (23.12s)
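The test polls for pods matching app=hello-node; an equivalent manual readiness check with standard kubectl (not what the harness itself runs) would be:

	kubectl --context functional-748000 rollout status deployment/hello-node --timeout=10m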

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 service list -o json
functional_test.go:1494: Took "207.47108ms" to run "out/minikube-darwin-amd64 -p functional-748000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.21s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.169.0.4:31682
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.169.0.4:31682
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.27s)
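With the NodePort endpoint discovered above, the service can be probed directly; a sketch reusing the URL from the log, assuming the cluster is still up:

	curl -s http://192.169.0.4:31682

The echoserver:1.8 image replies with the request details, so any HTTP response confirms the route.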

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-748000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-748000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-748000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3167: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-748000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.36s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-748000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-748000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [fd27d40e-438f-4166-8aac-67d16f11ec00] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [fd27d40e-438f-4166-8aac-67d16f11ec00] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003276478s
I0926 17:41:15.907555    1679 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.13s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-748000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.93.202 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)
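Direct access works because minikube tunnel installs a host route to the cluster's service network, so the LoadBalancer ingress IP reported above (10.108.93.202, a ClusterIP-range address) is reachable without a NodePort. A manual probe sketch:

	kubectl --context functional-748000 get svc nginx-svc
	curl -s http://10.108.93.202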

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I0926 17:41:15.989552    1679 config.go:182] Loaded profile config "functional-748000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I0926 17:41:16.053841    1679 config.go:182] Loaded profile config "functional-748000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-748000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
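For reference, the whole tunnel sequence above can be replayed by hand. A minimal sketch, assuming the functional-748000 profile is still running and commands are issued from the repo root; the service name, cluster DNS address, and flags are taken verbatim from the log:

    # start the tunnel in the background, as the test's daemon step does
    out/minikube-darwin-amd64 -p functional-748000 tunnel --alsologtostderr &
    # read the LoadBalancer ingress IP assigned to nginx-svc
    kubectl --context functional-748000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    # resolve the service name against the cluster DNS, then through the macOS resolver
    dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
    dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.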

TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1315: Took "213.82104ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1329: Took "79.621646ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1366: Took "207.521453ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1379: Took "79.734352ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)
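The four profile-listing variants timed above, collected in one place. The -l and --light forms skip the per-cluster status probe, which would explain why they come in around 80ms against roughly 210ms for the full listings. A sketch, assuming at least one profile exists:

    out/minikube-darwin-amd64 profile list                  # full table with live status
    out/minikube-darwin-amd64 profile list -l               # light listing, no status probe
    out/minikube-darwin-amd64 profile list -o json          # machine-readable
    out/minikube-darwin-amd64 profile list -o json --light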

TestFunctional/parallel/MountCmd/any-port (7.06s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-748000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3733729316/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727397685386247000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3733729316/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727397685386247000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3733729316/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727397685386247000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3733729316/001/test-1727397685386247000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-748000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (152.802664ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0926 17:41:25.539849    1679 retry.go:31] will retry after 490.845379ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 27 00:41 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 27 00:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 27 00:41 test-1727397685386247000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh cat /mount-9p/test-1727397685386247000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-748000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8e5b90e7-c2ac-49e1-a33c-72e17036267a] Pending
helpers_test.go:344: "busybox-mount" [8e5b90e7-c2ac-49e1-a33c-72e17036267a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8e5b90e7-c2ac-49e1-a33c-72e17036267a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8e5b90e7-c2ac-49e1-a33c-72e17036267a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003423931s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-748000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-748000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3733729316/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.06s)
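A by-hand version of the 9p round trip above. SRC is a hypothetical host directory standing in for the per-test temp dir, and the first findmnt may need a retry because the mount comes up asynchronously (the log shows exactly one such retry):

    SRC=/tmp/mount-demo; mkdir -p "$SRC"    # hypothetical stand-in for the temp dir
    out/minikube-darwin-amd64 mount -p functional-748000 "$SRC":/mount-9p --alsologtostderr -v=1 &
    out/minikube-darwin-amd64 -p functional-748000 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-darwin-amd64 -p functional-748000 ssh -- ls -la /mount-9p
    out/minikube-darwin-amd64 -p functional-748000 ssh "sudo umount -f /mount-9p"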

TestFunctional/parallel/MountCmd/specific-port (1.67s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-748000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port2298777503/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-748000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (157.13951ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0926 17:41:32.602262    1679 retry.go:31] will retry after 641.879221ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-748000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port2298777503/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-748000 ssh "sudo umount -f /mount-9p": exit status 1 (129.106717ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-748000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-748000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port2298777503/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.67s)
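The same flow with the 9p server pinned to a fixed host port instead of an ephemeral one. Note that after the mount daemon is stopped, the follow-up umount fails with "not mounted" (ssh exit status 32), which the test tolerates. A sketch reusing the hypothetical SRC from the previous block:

    out/minikube-darwin-amd64 mount -p functional-748000 "$SRC":/mount-9p --alsologtostderr -v=1 --port 46464 &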

TestFunctional/parallel/MountCmd/VerifyCleanup (2.09s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-748000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1767137440/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-748000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1767137440/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-748000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1767137440/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-748000 ssh "findmnt -T" /mount1: exit status 1 (158.143089ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0926 17:41:34.280818    1679 retry.go:31] will retry after 291.80547ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-748000 ssh "findmnt -T" /mount1: exit status 1 (229.41237ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0926 17:41:34.802387    1679 retry.go:31] will retry after 903.502722ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-748000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-748000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-748000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1767137440/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-748000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1767137440/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-748000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1767137440/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.09s)
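The cleanup path exercised above reduces to a single command: with /mount1, /mount2, and /mount3 active, one kill tears down every mount daemon for the profile, which is why the later stop steps report "unable to find parent, assuming dead":

    out/minikube-darwin-amd64 mount -p functional-748000 --kill=true    # kills all mount processes for this profile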

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-748000
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-748000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-748000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (201.84s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-476000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
E0926 17:43:14.347085    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-476000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (3m21.469233522s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (201.84s)
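This is the cluster shape the rest of the suite builds on: --ha requests a multi-control-plane topology, and -v=7 --alsologtostderr keeps the full libmachine/hyperkit trace seen in the status dumps below. A sketch, assuming the hyperkit driver is installed:

    out/minikube-darwin-amd64 start -p ha-476000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit
    out/minikube-darwin-amd64 -p ha-476000 status -v=7 --alsologtostderr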

TestMultiControlPlane/serial/DeployApp (5.69s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-476000 -- rollout status deployment/busybox: (3.341100233s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- exec busybox-7dff88458-bvjrf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- exec busybox-7dff88458-gvp8q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- exec busybox-7dff88458-jgndj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- exec busybox-7dff88458-bvjrf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- exec busybox-7dff88458-gvp8q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- exec busybox-7dff88458-jgndj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- exec busybox-7dff88458-bvjrf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- exec busybox-7dff88458-gvp8q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- exec busybox-7dff88458-jgndj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.69s)

TestMultiControlPlane/serial/PingHostFromPods (1.31s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- exec busybox-7dff88458-bvjrf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- exec busybox-7dff88458-bvjrf -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- exec busybox-7dff88458-gvp8q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- exec busybox-7dff88458-gvp8q -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- exec busybox-7dff88458-jgndj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-476000 -- exec busybox-7dff88458-jgndj -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.31s)
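The per-pod host check above, written out once: the pipeline relies on busybox nslookup printing the answer on its fifth line, so awk 'NR==5' plus cut extracts the bare address of host.minikube.internal, which is then pinged. A sketch for a single pod, with the pod name taken from the log:

    out/minikube-darwin-amd64 kubectl -p ha-476000 -- exec busybox-7dff88458-bvjrf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-darwin-amd64 kubectl -p ha-476000 -- exec busybox-7dff88458-bvjrf -- sh -c "ping -c 1 192.169.0.1"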

TestMultiControlPlane/serial/AddWorkerNode (52.78s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-476000 -v=7 --alsologtostderr
E0926 17:45:36.780496    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:45:36.787405    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:45:36.799865    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:45:36.822571    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:45:36.865687    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:45:36.948283    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:45:37.111106    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:45:37.433417    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:45:38.075943    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:45:39.358367    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:45:41.919937    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:45:47.108372    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:45:57.350994    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:46:17.833292    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-476000 -v=7 --alsologtostderr: (52.337067093s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (52.78s)

TestMultiControlPlane/serial/NodeLabels (0.05s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-476000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.48s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.48s)

TestMultiControlPlane/serial/CopyFile (9.05s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp testdata/cp-test.txt ha-476000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp ha-476000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3898402723/001/cp-test_ha-476000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp ha-476000:/home/docker/cp-test.txt ha-476000-m02:/home/docker/cp-test_ha-476000_ha-476000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m02 "sudo cat /home/docker/cp-test_ha-476000_ha-476000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp ha-476000:/home/docker/cp-test.txt ha-476000-m03:/home/docker/cp-test_ha-476000_ha-476000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m03 "sudo cat /home/docker/cp-test_ha-476000_ha-476000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp ha-476000:/home/docker/cp-test.txt ha-476000-m04:/home/docker/cp-test_ha-476000_ha-476000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m04 "sudo cat /home/docker/cp-test_ha-476000_ha-476000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp testdata/cp-test.txt ha-476000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp ha-476000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3898402723/001/cp-test_ha-476000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp ha-476000-m02:/home/docker/cp-test.txt ha-476000:/home/docker/cp-test_ha-476000-m02_ha-476000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000 "sudo cat /home/docker/cp-test_ha-476000-m02_ha-476000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp ha-476000-m02:/home/docker/cp-test.txt ha-476000-m03:/home/docker/cp-test_ha-476000-m02_ha-476000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m03 "sudo cat /home/docker/cp-test_ha-476000-m02_ha-476000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp ha-476000-m02:/home/docker/cp-test.txt ha-476000-m04:/home/docker/cp-test_ha-476000-m02_ha-476000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m04 "sudo cat /home/docker/cp-test_ha-476000-m02_ha-476000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp testdata/cp-test.txt ha-476000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp ha-476000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3898402723/001/cp-test_ha-476000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp ha-476000-m03:/home/docker/cp-test.txt ha-476000:/home/docker/cp-test_ha-476000-m03_ha-476000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000 "sudo cat /home/docker/cp-test_ha-476000-m03_ha-476000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp ha-476000-m03:/home/docker/cp-test.txt ha-476000-m02:/home/docker/cp-test_ha-476000-m03_ha-476000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m02 "sudo cat /home/docker/cp-test_ha-476000-m03_ha-476000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp ha-476000-m03:/home/docker/cp-test.txt ha-476000-m04:/home/docker/cp-test_ha-476000-m03_ha-476000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m04 "sudo cat /home/docker/cp-test_ha-476000-m03_ha-476000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp testdata/cp-test.txt ha-476000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3898402723/001/cp-test_ha-476000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt ha-476000:/home/docker/cp-test_ha-476000-m04_ha-476000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000 "sudo cat /home/docker/cp-test_ha-476000-m04_ha-476000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt ha-476000-m02:/home/docker/cp-test_ha-476000-m04_ha-476000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m02 "sudo cat /home/docker/cp-test_ha-476000-m04_ha-476000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 cp ha-476000-m04:/home/docker/cp-test.txt ha-476000-m03:/home/docker/cp-test_ha-476000-m04_ha-476000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m03 "sudo cat /home/docker/cp-test_ha-476000-m04_ha-476000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (9.05s)
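The matrix above covers every (source, destination) pair across the host and the four nodes; one representative cell of each kind is enough to reproduce it, with /tmp/cp-test.txt as a hypothetical host-side destination:

    out/minikube-darwin-amd64 -p ha-476000 cp testdata/cp-test.txt ha-476000:/home/docker/cp-test.txt      # host -> node
    out/minikube-darwin-amd64 -p ha-476000 cp ha-476000:/home/docker/cp-test.txt /tmp/cp-test.txt          # node -> host (hypothetical path)
    out/minikube-darwin-amd64 -p ha-476000 cp ha-476000:/home/docker/cp-test.txt ha-476000-m02:/home/docker/cp-test.txt   # node -> node
    out/minikube-darwin-amd64 -p ha-476000 ssh -n ha-476000-m02 "sudo cat /home/docker/cp-test.txt"        # verify on the target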

TestMultiControlPlane/serial/StopSecondaryNode (8.7s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-476000 node stop m02 -v=7 --alsologtostderr: (8.339506253s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-476000 status -v=7 --alsologtostderr: exit status 7 (354.711818ms)

-- stdout --
	ha-476000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-476000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-476000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-476000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0926 17:46:38.266425    3973 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:46:38.266629    3973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:46:38.266635    3973 out.go:358] Setting ErrFile to fd 2...
	I0926 17:46:38.266638    3973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:46:38.266824    3973 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 17:46:38.267007    3973 out.go:352] Setting JSON to false
	I0926 17:46:38.267030    3973 mustload.go:65] Loading cluster: ha-476000
	I0926 17:46:38.267069    3973 notify.go:220] Checking for updates...
	I0926 17:46:38.267379    3973 config.go:182] Loaded profile config "ha-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:46:38.267399    3973 status.go:174] checking status of ha-476000 ...
	I0926 17:46:38.267833    3973 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:46:38.267883    3973 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:46:38.277054    3973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51767
	I0926 17:46:38.277429    3973 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:46:38.277819    3973 main.go:141] libmachine: Using API Version  1
	I0926 17:46:38.277827    3973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:46:38.278035    3973 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:46:38.278141    3973 main.go:141] libmachine: (ha-476000) Calling .GetState
	I0926 17:46:38.278217    3973 main.go:141] libmachine: (ha-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:46:38.278302    3973 main.go:141] libmachine: (ha-476000) DBG | hyperkit pid from json: 3501
	I0926 17:46:38.279279    3973 status.go:364] ha-476000 host status = "Running" (err=<nil>)
	I0926 17:46:38.279297    3973 host.go:66] Checking if "ha-476000" exists ...
	I0926 17:46:38.279551    3973 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:46:38.279571    3973 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:46:38.288095    3973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51769
	I0926 17:46:38.288521    3973 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:46:38.288833    3973 main.go:141] libmachine: Using API Version  1
	I0926 17:46:38.288844    3973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:46:38.289073    3973 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:46:38.289180    3973 main.go:141] libmachine: (ha-476000) Calling .GetIP
	I0926 17:46:38.289268    3973 host.go:66] Checking if "ha-476000" exists ...
	I0926 17:46:38.289519    3973 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:46:38.289547    3973 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:46:38.301312    3973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51771
	I0926 17:46:38.301688    3973 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:46:38.302010    3973 main.go:141] libmachine: Using API Version  1
	I0926 17:46:38.302019    3973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:46:38.302206    3973 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:46:38.302313    3973 main.go:141] libmachine: (ha-476000) Calling .DriverName
	I0926 17:46:38.302450    3973 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 17:46:38.302469    3973 main.go:141] libmachine: (ha-476000) Calling .GetSSHHostname
	I0926 17:46:38.302551    3973 main.go:141] libmachine: (ha-476000) Calling .GetSSHPort
	I0926 17:46:38.302623    3973 main.go:141] libmachine: (ha-476000) Calling .GetSSHKeyPath
	I0926 17:46:38.302695    3973 main.go:141] libmachine: (ha-476000) Calling .GetSSHUsername
	I0926 17:46:38.302769    3973 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000/id_rsa Username:docker}
	I0926 17:46:38.337189    3973 ssh_runner.go:195] Run: systemctl --version
	I0926 17:46:38.344775    3973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 17:46:38.355736    3973 kubeconfig.go:125] found "ha-476000" server: "https://192.169.0.254:8443"
	I0926 17:46:38.355767    3973 api_server.go:166] Checking apiserver status ...
	I0926 17:46:38.355816    3973 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 17:46:38.369217    3973 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1988/cgroup
	W0926 17:46:38.376575    3973 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1988/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0926 17:46:38.376623    3973 ssh_runner.go:195] Run: ls
	I0926 17:46:38.379757    3973 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0926 17:46:38.382823    3973 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0926 17:46:38.382832    3973 status.go:456] ha-476000 apiserver status = Running (err=<nil>)
	I0926 17:46:38.382838    3973 status.go:176] ha-476000 status: &{Name:ha-476000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 17:46:38.382848    3973 status.go:174] checking status of ha-476000-m02 ...
	I0926 17:46:38.383109    3973 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:46:38.383130    3973 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:46:38.392239    3973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51775
	I0926 17:46:38.392595    3973 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:46:38.392945    3973 main.go:141] libmachine: Using API Version  1
	I0926 17:46:38.392963    3973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:46:38.393169    3973 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:46:38.393288    3973 main.go:141] libmachine: (ha-476000-m02) Calling .GetState
	I0926 17:46:38.393372    3973 main.go:141] libmachine: (ha-476000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:46:38.393449    3973 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid from json: 3516
	I0926 17:46:38.394452    3973 main.go:141] libmachine: (ha-476000-m02) DBG | hyperkit pid 3516 missing from process table
	I0926 17:46:38.394477    3973 status.go:364] ha-476000-m02 host status = "Stopped" (err=<nil>)
	I0926 17:46:38.394483    3973 status.go:377] host is not running, skipping remaining checks
	I0926 17:46:38.394487    3973 status.go:176] ha-476000-m02 status: &{Name:ha-476000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 17:46:38.394500    3973 status.go:174] checking status of ha-476000-m03 ...
	I0926 17:46:38.394783    3973 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:46:38.394807    3973 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:46:38.403674    3973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51777
	I0926 17:46:38.404020    3973 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:46:38.404364    3973 main.go:141] libmachine: Using API Version  1
	I0926 17:46:38.404384    3973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:46:38.404592    3973 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:46:38.404711    3973 main.go:141] libmachine: (ha-476000-m03) Calling .GetState
	I0926 17:46:38.404790    3973 main.go:141] libmachine: (ha-476000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:46:38.404867    3973 main.go:141] libmachine: (ha-476000-m03) DBG | hyperkit pid from json: 3537
	I0926 17:46:38.405868    3973 status.go:364] ha-476000-m03 host status = "Running" (err=<nil>)
	I0926 17:46:38.405877    3973 host.go:66] Checking if "ha-476000-m03" exists ...
	I0926 17:46:38.406159    3973 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:46:38.406185    3973 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:46:38.414752    3973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51779
	I0926 17:46:38.415103    3973 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:46:38.415417    3973 main.go:141] libmachine: Using API Version  1
	I0926 17:46:38.415428    3973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:46:38.415620    3973 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:46:38.415735    3973 main.go:141] libmachine: (ha-476000-m03) Calling .GetIP
	I0926 17:46:38.415832    3973 host.go:66] Checking if "ha-476000-m03" exists ...
	I0926 17:46:38.416104    3973 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:46:38.416127    3973 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:46:38.424596    3973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51781
	I0926 17:46:38.424989    3973 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:46:38.425328    3973 main.go:141] libmachine: Using API Version  1
	I0926 17:46:38.425339    3973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:46:38.425550    3973 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:46:38.425676    3973 main.go:141] libmachine: (ha-476000-m03) Calling .DriverName
	I0926 17:46:38.425811    3973 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 17:46:38.425822    3973 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHHostname
	I0926 17:46:38.425908    3973 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHPort
	I0926 17:46:38.425985    3973 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHKeyPath
	I0926 17:46:38.426066    3973 main.go:141] libmachine: (ha-476000-m03) Calling .GetSSHUsername
	I0926 17:46:38.426147    3973 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m03/id_rsa Username:docker}
	I0926 17:46:38.454592    3973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 17:46:38.466341    3973 kubeconfig.go:125] found "ha-476000" server: "https://192.169.0.254:8443"
	I0926 17:46:38.466362    3973 api_server.go:166] Checking apiserver status ...
	I0926 17:46:38.466412    3973 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 17:46:38.477532    3973 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1939/cgroup
	W0926 17:46:38.485071    3973 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1939/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0926 17:46:38.485122    3973 ssh_runner.go:195] Run: ls
	I0926 17:46:38.488563    3973 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0926 17:46:38.491741    3973 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0926 17:46:38.491756    3973 status.go:456] ha-476000-m03 apiserver status = Running (err=<nil>)
	I0926 17:46:38.491764    3973 status.go:176] ha-476000-m03 status: &{Name:ha-476000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 17:46:38.491776    3973 status.go:174] checking status of ha-476000-m04 ...
	I0926 17:46:38.492058    3973 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:46:38.492087    3973 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:46:38.500818    3973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51785
	I0926 17:46:38.501167    3973 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:46:38.501480    3973 main.go:141] libmachine: Using API Version  1
	I0926 17:46:38.501490    3973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:46:38.501697    3973 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:46:38.501796    3973 main.go:141] libmachine: (ha-476000-m04) Calling .GetState
	I0926 17:46:38.501895    3973 main.go:141] libmachine: (ha-476000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 17:46:38.501963    3973 main.go:141] libmachine: (ha-476000-m04) DBG | hyperkit pid from json: 3636
	I0926 17:46:38.502948    3973 status.go:364] ha-476000-m04 host status = "Running" (err=<nil>)
	I0926 17:46:38.502956    3973 host.go:66] Checking if "ha-476000-m04" exists ...
	I0926 17:46:38.503214    3973 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:46:38.503239    3973 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:46:38.511673    3973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51787
	I0926 17:46:38.512051    3973 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:46:38.512416    3973 main.go:141] libmachine: Using API Version  1
	I0926 17:46:38.512431    3973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:46:38.512639    3973 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:46:38.512756    3973 main.go:141] libmachine: (ha-476000-m04) Calling .GetIP
	I0926 17:46:38.512841    3973 host.go:66] Checking if "ha-476000-m04" exists ...
	I0926 17:46:38.513101    3973 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 17:46:38.513130    3973 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 17:46:38.521550    3973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51789
	I0926 17:46:38.521887    3973 main.go:141] libmachine: () Calling .GetVersion
	I0926 17:46:38.522221    3973 main.go:141] libmachine: Using API Version  1
	I0926 17:46:38.522237    3973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 17:46:38.522454    3973 main.go:141] libmachine: () Calling .GetMachineName
	I0926 17:46:38.522558    3973 main.go:141] libmachine: (ha-476000-m04) Calling .DriverName
	I0926 17:46:38.522697    3973 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 17:46:38.522711    3973 main.go:141] libmachine: (ha-476000-m04) Calling .GetSSHHostname
	I0926 17:46:38.522798    3973 main.go:141] libmachine: (ha-476000-m04) Calling .GetSSHPort
	I0926 17:46:38.522880    3973 main.go:141] libmachine: (ha-476000-m04) Calling .GetSSHKeyPath
	I0926 17:46:38.522965    3973 main.go:141] libmachine: (ha-476000-m04) Calling .GetSSHUsername
	I0926 17:46:38.523049    3973 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/ha-476000-m04/id_rsa Username:docker}
	I0926 17:46:38.553047    3973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 17:46:38.564270    3973 status.go:176] ha-476000-m04 status: &{Name:ha-476000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.70s)
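The stop/verify pair from this test, for reference. The status command deliberately exits non-zero (7 above) while any node is stopped, so the non-zero exit is the expected signal here rather than a failure:

    out/minikube-darwin-amd64 -p ha-476000 node stop m02 -v=7 --alsologtostderr
    out/minikube-darwin-amd64 -p ha-476000 status -v=7 --alsologtostderr    # exit status 7 while m02 is down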

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.4s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.40s)

TestMultiControlPlane/serial/RestartSecondaryNode (42.68s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 node start m02 -v=7 --alsologtostderr
E0926 17:46:58.795378    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-476000 node start m02 -v=7 --alsologtostderr: (42.174952851s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-476000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (42.68s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.49s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.49s)

TestImageBuild/serial/Setup (37.41s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-222000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-222000 --driver=hyperkit : (37.408244174s)
--- PASS: TestImageBuild/serial/Setup (37.41s)

TestImageBuild/serial/NormalBuild (1.85s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-222000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-222000: (1.854634428s)
--- PASS: TestImageBuild/serial/NormalBuild (1.85s)

TestImageBuild/serial/BuildWithBuildArg (0.85s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-222000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.85s)

TestImageBuild/serial/BuildWithDockerIgnore (0.68s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-222000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.68s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.82s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-222000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.82s)
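The image-build variants exercised by this group, collected in one place; all flags are verbatim from the runs above:

    out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-222000
    out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-222000
    out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-222000   # exercises .dockerignore handling
    out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-222000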

TestJSONOutput/start/Command (77.63s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-038000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0926 18:03:14.457052    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-038000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (1m17.634442742s)
--- PASS: TestJSONOutput/start/Command (77.63s)
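What this group validates: with --output=json each progress line is emitted as a structured JSON event, and --user tags the audit log entry. The same two flags carry through the pause/unpause runs below:

    out/minikube-darwin-amd64 start -p json-output-038000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit
    out/minikube-darwin-amd64 pause -p json-output-038000 --output=json --user=testUser
    out/minikube-darwin-amd64 unpause -p json-output-038000 --output=json --user=testUser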

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.49s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-038000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.49s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.46s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-038000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.46s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8.36s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-038000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-038000 --output=json --user=testUser: (8.355371659s)
--- PASS: TestJSONOutput/stop/Command (8.36s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.58s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-183000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-183000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (361.978786ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"28bb7b3b-beda-4975-8d6a-0d3326241504","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-183000] minikube v1.34.0 on Darwin 14.6.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"278702f1-e1fd-4e01-af1a-31f1d0146988","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19711"}}
	{"specversion":"1.0","id":"bb30dffb-46b0-406f-af9c-68f389983004","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig"}}
	{"specversion":"1.0","id":"cd77106d-6ff3-4a35-a69a-a725d5195258","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"e96b9146-cc78-4521-b7de-df55246d4ed4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dbc5ab4f-f66a-486e-8517-48957737535b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube"}}
	{"specversion":"1.0","id":"c0b0a0ab-eda6-4c75-b0f8-0095eb1cc53b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0a883029-c2fd-465a-839b-f64414a9500d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-183000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-183000
--- PASS: TestErrorJSONOutput (0.58s)
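Note: the stdout block above shows the event stream that `--output=json` emits: one CloudEvents-style JSON object per line, with the payload under `data`. A minimal Go sketch of decoding one such line follows; the field names are taken directly from the logged JSON, while the type name `minikubeEvent` is ours for illustration, not minikube's.

package main

import (
	"encoding/json"
	"fmt"
)

// minikubeEvent mirrors the envelope seen in the log above: specversion,
// id, source, type, datacontenttype, and a string-to-string data payload.
type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// The DRV_UNSUPPORTED_OS error event from the stdout block above.
	line := `{"specversion":"1.0","id":"0a883029-c2fd-465a-839b-f64414a9500d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit %s)\n", ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
}

Since every `data` value in the logged events is a string (even counters like "currentstep":"0"), a map[string]string payload is sufficient for this test's output.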

                                                
                                    
TestMainNoArgs (0.08s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (84.98s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-116000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-116000 --driver=hyperkit : (38.021054232s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-128000 --driver=hyperkit 
E0926 18:05:36.889057    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-128000 --driver=hyperkit : (37.451568533s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-116000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-128000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-128000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-128000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-128000: (3.393044575s)
helpers_test.go:175: Cleaning up "first-116000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-116000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-116000: (5.255982384s)
--- PASS: TestMinikubeProfile (84.98s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (109.07s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-108000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-108000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m48.827014808s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.07s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.82s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-108000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-108000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-108000 -- rollout status deployment/busybox: (3.181252291s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-108000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-108000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-108000 -- exec busybox-7dff88458-bszv7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-108000 -- exec busybox-7dff88458-p6dk8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-108000 -- exec busybox-7dff88458-bszv7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-108000 -- exec busybox-7dff88458-p6dk8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-108000 -- exec busybox-7dff88458-bszv7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-108000 -- exec busybox-7dff88458-p6dk8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.82s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.89s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-108000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-108000 -- exec busybox-7dff88458-bszv7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-108000 -- exec busybox-7dff88458-bszv7 -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-108000 -- exec busybox-7dff88458-p6dk8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-108000 -- exec busybox-7dff88458-p6dk8 -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)

                                                
                                    
TestMultiNode/serial/AddNode (45.95s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-108000 -v 3 --alsologtostderr
E0926 18:10:36.890589    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-108000 -v 3 --alsologtostderr: (45.599677965s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.95s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.05s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-108000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.37s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.44s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 cp testdata/cp-test.txt multinode-108000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 ssh -n multinode-108000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 cp multinode-108000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1690180015/001/cp-test_multinode-108000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 ssh -n multinode-108000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 cp multinode-108000:/home/docker/cp-test.txt multinode-108000-m02:/home/docker/cp-test_multinode-108000_multinode-108000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 ssh -n multinode-108000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 ssh -n multinode-108000-m02 "sudo cat /home/docker/cp-test_multinode-108000_multinode-108000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 cp multinode-108000:/home/docker/cp-test.txt multinode-108000-m03:/home/docker/cp-test_multinode-108000_multinode-108000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 ssh -n multinode-108000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 ssh -n multinode-108000-m03 "sudo cat /home/docker/cp-test_multinode-108000_multinode-108000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 cp testdata/cp-test.txt multinode-108000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 ssh -n multinode-108000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 cp multinode-108000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1690180015/001/cp-test_multinode-108000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 ssh -n multinode-108000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 cp multinode-108000-m02:/home/docker/cp-test.txt multinode-108000:/home/docker/cp-test_multinode-108000-m02_multinode-108000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 ssh -n multinode-108000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 ssh -n multinode-108000 "sudo cat /home/docker/cp-test_multinode-108000-m02_multinode-108000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 cp multinode-108000-m02:/home/docker/cp-test.txt multinode-108000-m03:/home/docker/cp-test_multinode-108000-m02_multinode-108000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 ssh -n multinode-108000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 ssh -n multinode-108000-m03 "sudo cat /home/docker/cp-test_multinode-108000-m02_multinode-108000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 cp testdata/cp-test.txt multinode-108000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 ssh -n multinode-108000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 cp multinode-108000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1690180015/001/cp-test_multinode-108000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 ssh -n multinode-108000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 cp multinode-108000-m03:/home/docker/cp-test.txt multinode-108000:/home/docker/cp-test_multinode-108000-m03_multinode-108000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 ssh -n multinode-108000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 ssh -n multinode-108000 "sudo cat /home/docker/cp-test_multinode-108000-m03_multinode-108000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 cp multinode-108000-m03:/home/docker/cp-test.txt multinode-108000-m02:/home/docker/cp-test_multinode-108000-m03_multinode-108000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 ssh -n multinode-108000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 ssh -n multinode-108000-m02 "sudo cat /home/docker/cp-test_multinode-108000-m03_multinode-108000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.44s)

                                                
                                    
TestMultiNode/serial/StopNode (2.84s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-108000 node stop m03: (2.340145273s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-108000 status: exit status 7 (250.132071ms)

                                                
                                                
-- stdout --
	multinode-108000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-108000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-108000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-108000 status --alsologtostderr: exit status 7 (250.997604ms)

                                                
                                                
-- stdout --
	multinode-108000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-108000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-108000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 18:11:05.271598    5345 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:11:05.271868    5345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:11:05.271874    5345 out.go:358] Setting ErrFile to fd 2...
	I0926 18:11:05.271878    5345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:11:05.272061    5345 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 18:11:05.272240    5345 out.go:352] Setting JSON to false
	I0926 18:11:05.272262    5345 mustload.go:65] Loading cluster: multinode-108000
	I0926 18:11:05.272303    5345 notify.go:220] Checking for updates...
	I0926 18:11:05.272581    5345 config.go:182] Loaded profile config "multinode-108000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:11:05.272604    5345 status.go:174] checking status of multinode-108000 ...
	I0926 18:11:05.273038    5345 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:11:05.273096    5345 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:11:05.282073    5345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53111
	I0926 18:11:05.282456    5345 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:11:05.282885    5345 main.go:141] libmachine: Using API Version  1
	I0926 18:11:05.282894    5345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:11:05.283113    5345 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:11:05.283218    5345 main.go:141] libmachine: (multinode-108000) Calling .GetState
	I0926 18:11:05.283312    5345 main.go:141] libmachine: (multinode-108000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:11:05.283375    5345 main.go:141] libmachine: (multinode-108000) DBG | hyperkit pid from json: 5034
	I0926 18:11:05.284536    5345 status.go:364] multinode-108000 host status = "Running" (err=<nil>)
	I0926 18:11:05.284553    5345 host.go:66] Checking if "multinode-108000" exists ...
	I0926 18:11:05.284792    5345 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:11:05.284811    5345 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:11:05.293267    5345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53113
	I0926 18:11:05.293621    5345 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:11:05.293938    5345 main.go:141] libmachine: Using API Version  1
	I0926 18:11:05.293949    5345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:11:05.294158    5345 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:11:05.294266    5345 main.go:141] libmachine: (multinode-108000) Calling .GetIP
	I0926 18:11:05.294349    5345 host.go:66] Checking if "multinode-108000" exists ...
	I0926 18:11:05.294611    5345 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:11:05.294637    5345 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:11:05.303032    5345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53115
	I0926 18:11:05.303365    5345 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:11:05.303690    5345 main.go:141] libmachine: Using API Version  1
	I0926 18:11:05.303706    5345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:11:05.303900    5345 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:11:05.304000    5345 main.go:141] libmachine: (multinode-108000) Calling .DriverName
	I0926 18:11:05.304145    5345 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 18:11:05.304161    5345 main.go:141] libmachine: (multinode-108000) Calling .GetSSHHostname
	I0926 18:11:05.304239    5345 main.go:141] libmachine: (multinode-108000) Calling .GetSSHPort
	I0926 18:11:05.304324    5345 main.go:141] libmachine: (multinode-108000) Calling .GetSSHKeyPath
	I0926 18:11:05.304418    5345 main.go:141] libmachine: (multinode-108000) Calling .GetSSHUsername
	I0926 18:11:05.304502    5345 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000/id_rsa Username:docker}
	I0926 18:11:05.340166    5345 ssh_runner.go:195] Run: systemctl --version
	I0926 18:11:05.344547    5345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 18:11:05.355677    5345 kubeconfig.go:125] found "multinode-108000" server: "https://192.169.0.14:8443"
	I0926 18:11:05.355702    5345 api_server.go:166] Checking apiserver status ...
	I0926 18:11:05.355745    5345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:11:05.366717    5345 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1898/cgroup
	W0926 18:11:05.374321    5345 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1898/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0926 18:11:05.374366    5345 ssh_runner.go:195] Run: ls
	I0926 18:11:05.377595    5345 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0926 18:11:05.380755    5345 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
	I0926 18:11:05.380766    5345 status.go:456] multinode-108000 apiserver status = Running (err=<nil>)
	I0926 18:11:05.380773    5345 status.go:176] multinode-108000 status: &{Name:multinode-108000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 18:11:05.380783    5345 status.go:174] checking status of multinode-108000-m02 ...
	I0926 18:11:05.381050    5345 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:11:05.381071    5345 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:11:05.389777    5345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53119
	I0926 18:11:05.390123    5345 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:11:05.390441    5345 main.go:141] libmachine: Using API Version  1
	I0926 18:11:05.390454    5345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:11:05.390655    5345 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:11:05.390769    5345 main.go:141] libmachine: (multinode-108000-m02) Calling .GetState
	I0926 18:11:05.390847    5345 main.go:141] libmachine: (multinode-108000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:11:05.390920    5345 main.go:141] libmachine: (multinode-108000-m02) DBG | hyperkit pid from json: 5057
	I0926 18:11:05.392087    5345 status.go:364] multinode-108000-m02 host status = "Running" (err=<nil>)
	I0926 18:11:05.392096    5345 host.go:66] Checking if "multinode-108000-m02" exists ...
	I0926 18:11:05.392354    5345 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:11:05.392403    5345 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:11:05.400904    5345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53121
	I0926 18:11:05.401242    5345 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:11:05.401553    5345 main.go:141] libmachine: Using API Version  1
	I0926 18:11:05.401564    5345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:11:05.401762    5345 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:11:05.401851    5345 main.go:141] libmachine: (multinode-108000-m02) Calling .GetIP
	I0926 18:11:05.401927    5345 host.go:66] Checking if "multinode-108000-m02" exists ...
	I0926 18:11:05.402173    5345 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:11:05.402198    5345 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:11:05.410587    5345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53123
	I0926 18:11:05.410919    5345 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:11:05.411281    5345 main.go:141] libmachine: Using API Version  1
	I0926 18:11:05.411297    5345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:11:05.411507    5345 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:11:05.411621    5345 main.go:141] libmachine: (multinode-108000-m02) Calling .DriverName
	I0926 18:11:05.411754    5345 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 18:11:05.411765    5345 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHHostname
	I0926 18:11:05.411856    5345 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHPort
	I0926 18:11:05.411935    5345 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHKeyPath
	I0926 18:11:05.412020    5345 main.go:141] libmachine: (multinode-108000-m02) Calling .GetSSHUsername
	I0926 18:11:05.412090    5345 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1128/.minikube/machines/multinode-108000-m02/id_rsa Username:docker}
	I0926 18:11:05.444292    5345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 18:11:05.454107    5345 status.go:176] multinode-108000-m02 status: &{Name:multinode-108000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0926 18:11:05.454126    5345 status.go:174] checking status of multinode-108000-m03 ...
	I0926 18:11:05.454410    5345 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:11:05.454432    5345 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:11:05.463215    5345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53126
	I0926 18:11:05.463606    5345 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:11:05.463917    5345 main.go:141] libmachine: Using API Version  1
	I0926 18:11:05.463932    5345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:11:05.464149    5345 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:11:05.464267    5345 main.go:141] libmachine: (multinode-108000-m03) Calling .GetState
	I0926 18:11:05.464351    5345 main.go:141] libmachine: (multinode-108000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:11:05.464416    5345 main.go:141] libmachine: (multinode-108000-m03) DBG | hyperkit pid from json: 5127
	I0926 18:11:05.465568    5345 main.go:141] libmachine: (multinode-108000-m03) DBG | hyperkit pid 5127 missing from process table
	I0926 18:11:05.465612    5345 status.go:364] multinode-108000-m03 host status = "Stopped" (err=<nil>)
	I0926 18:11:05.465621    5345 status.go:377] host is not running, skipping remaining checks
	I0926 18:11:05.465625    5345 status.go:176] multinode-108000-m03 status: &{Name:multinode-108000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.84s)
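Note: the status.go:176 lines in the stderr trace above dump the per-node record behind `minikube status`. Read back from that dump, the record's shape is roughly the following sketch; the field names come straight from the logged struct dump, while the type name `nodeStatus` is ours, not minikube's source.

package main

import "fmt"

// nodeStatus is a stand-in matching the dump at status.go:176, e.g.
// &{Name:multinode-108000-m03 Host:Stopped Kubelet:Stopped
//   APIServer:Stopped Kubeconfig:Stopped Worker:true ...}.
type nodeStatus struct {
	Name       string // node name, e.g. "multinode-108000-m02"
	Host       string // VM state: "Running" or "Stopped"
	Kubelet    string // kubelet service state
	APIServer  string // "Irrelevant" on workers, per the m02 record above
	Kubeconfig string // "Configured" on the running control plane
	Worker     bool   // false for the control-plane node
	TimeToStop string // empty in the records above
	DockerEnv  string
	PodManEnv  string
}

func main() {
	// The m03 record from the trace, reconstructed.
	fmt.Printf("%+v\n", nodeStatus{Name: "multinode-108000-m03", Host: "Stopped",
		Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped", Worker: true})
}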

                                                
                                    
TestMultiNode/serial/StartAfterStop (36.5s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-108000 node start m03 -v=7 --alsologtostderr: (36.11521157s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.50s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (188.57s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-108000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-108000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-108000: (18.872683129s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-108000 --wait=true -v=8 --alsologtostderr
E0926 18:13:14.460686    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-108000 --wait=true -v=8 --alsologtostderr: (2m49.58540009s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-108000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (188.57s)

                                                
                                    
TestMultiNode/serial/DeleteNode (3.37s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-108000 node delete m03: (3.036908732s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.37s)
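Note: the go-template that the DeleteNode step above hands to kubectl is plain Go text/template syntax: it ranges over .items, then each node's .status.conditions, and prints .status for the "Ready" condition. A minimal standalone sketch running the same template follows; the node data here is hand-built stand-in data for illustration, not output from the cluster above.

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same template string the test passes via -o go-template
	// (kubectl's go-template output is Go text/template).
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Hand-built stand-in for `kubectl get nodes -o json`; illustrative only.
	nodes := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "MemoryPressure", "status": "False"},
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}

	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil { // prints " True" once per Ready node
		panic(err)
	}
}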

                                                
                                    
TestMultiNode/serial/StopMultiNode (16.78s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-108000 stop: (16.620049093s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-108000 status: exit status 7 (80.501443ms)

                                                
                                                
-- stdout --
	multinode-108000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-108000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-108000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-108000 status --alsologtostderr: exit status 7 (79.642068ms)

                                                
                                                
-- stdout --
	multinode-108000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-108000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 18:15:10.670156    5492 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:15:10.670353    5492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:15:10.670359    5492 out.go:358] Setting ErrFile to fd 2...
	I0926 18:15:10.670363    5492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:15:10.670536    5492 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1128/.minikube/bin
	I0926 18:15:10.670735    5492 out.go:352] Setting JSON to false
	I0926 18:15:10.670757    5492 mustload.go:65] Loading cluster: multinode-108000
	I0926 18:15:10.670799    5492 notify.go:220] Checking for updates...
	I0926 18:15:10.671094    5492 config.go:182] Loaded profile config "multinode-108000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:15:10.671113    5492 status.go:174] checking status of multinode-108000 ...
	I0926 18:15:10.671531    5492 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:15:10.671576    5492 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:15:10.680407    5492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53357
	I0926 18:15:10.680778    5492 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:15:10.681186    5492 main.go:141] libmachine: Using API Version  1
	I0926 18:15:10.681195    5492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:15:10.681392    5492 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:15:10.681497    5492 main.go:141] libmachine: (multinode-108000) Calling .GetState
	I0926 18:15:10.681571    5492 main.go:141] libmachine: (multinode-108000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:15:10.681641    5492 main.go:141] libmachine: (multinode-108000) DBG | hyperkit pid from json: 5408
	I0926 18:15:10.682563    5492 main.go:141] libmachine: (multinode-108000) DBG | hyperkit pid 5408 missing from process table
	I0926 18:15:10.682602    5492 status.go:364] multinode-108000 host status = "Stopped" (err=<nil>)
	I0926 18:15:10.682616    5492 status.go:377] host is not running, skipping remaining checks
	I0926 18:15:10.682620    5492 status.go:176] multinode-108000 status: &{Name:multinode-108000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 18:15:10.682637    5492 status.go:174] checking status of multinode-108000-m02 ...
	I0926 18:15:10.682929    5492 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0926 18:15:10.682957    5492 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0926 18:15:10.691388    5492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53359
	I0926 18:15:10.691756    5492 main.go:141] libmachine: () Calling .GetVersion
	I0926 18:15:10.692101    5492 main.go:141] libmachine: Using API Version  1
	I0926 18:15:10.692119    5492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 18:15:10.692325    5492 main.go:141] libmachine: () Calling .GetMachineName
	I0926 18:15:10.692440    5492 main.go:141] libmachine: (multinode-108000-m02) Calling .GetState
	I0926 18:15:10.692527    5492 main.go:141] libmachine: (multinode-108000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0926 18:15:10.692594    5492 main.go:141] libmachine: (multinode-108000-m02) DBG | hyperkit pid from json: 5421
	I0926 18:15:10.693485    5492 main.go:141] libmachine: (multinode-108000-m02) DBG | hyperkit pid 5421 missing from process table
	I0926 18:15:10.693528    5492 status.go:364] multinode-108000-m02 host status = "Stopped" (err=<nil>)
	I0926 18:15:10.693539    5492 status.go:377] host is not running, skipping remaining checks
	I0926 18:15:10.693542    5492 status.go:176] multinode-108000-m02 status: &{Name:multinode-108000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.78s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (46.27s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-108000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-108000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-108000-m02 --driver=hyperkit : exit status 14 (413.862956ms)

                                                
                                                
-- stdout --
	* [multinode-108000-m02] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-108000-m02' is duplicated with machine name 'multinode-108000-m02' in profile 'multinode-108000'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-108000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-108000-m03 --driver=hyperkit : (40.249931562s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-108000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-108000: exit status 80 (278.359408ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-108000 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-108000-m03 already exists in multinode-108000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-108000-m03
E0926 18:18:14.518144    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-108000-m03: (5.267489598s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.27s)

                                                
                                    
TestPreload (181.35s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-688000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E0926 18:18:40.026155    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-688000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m57.929140237s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-688000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-688000 image pull gcr.io/k8s-minikube/busybox: (1.553417999s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-688000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-688000: (8.400851148s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-688000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
E0926 18:20:36.949531    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-688000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (48.064153077s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-688000 image list
helpers_test.go:175: Cleaning up "test-preload-688000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-688000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-688000: (5.240918746s)
--- PASS: TestPreload (181.35s)

                                                
                                    
TestSkaffold (113.59s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe252424944 version
skaffold_test.go:59: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe252424944 version: (1.746922009s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-729000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-729000 --memory=2600 --driver=hyperkit : (35.929984681s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe252424944 run --minikube-profile skaffold-729000 --kube-context skaffold-729000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe252424944 run --minikube-profile skaffold-729000 --kube-context skaffold-729000 --status-check=true --port-forward=false --interactive=false: (57.911197816s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-775b76f8c4-wd6fm" [e0f044f6-2cf9-43ea-be2e-0cb2c222e25e] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003523042s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7b976695b-c72vk" [915d018a-42f8-409f-abac-d444daedda6a] Running
E0926 18:25:36.953932    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003999463s
helpers_test.go:175: Cleaning up "skaffold-729000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-729000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-729000: (5.253321421s)
--- PASS: TestSkaffold (113.59s)

                                                
                                    
TestRunningBinaryUpgrade (81.67s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.3635852454 start -p running-upgrade-048000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.3635852454 start -p running-upgrade-048000 --memory=2200 --vm-driver=hyperkit : (52.068260905s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-048000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0926 18:39:37.621086    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-048000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (22.654649312s)
helpers_test.go:175: Cleaning up "running-upgrade-048000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-048000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-048000: (5.248988289s)
--- PASS: TestRunningBinaryUpgrade (81.67s)
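
Note: this test drives two different minikube binaries against one profile: a released v1.26.0 binary creates the cluster, then the binary under test restarts the same profile while it is still running. A rough sketch of that flow follows (the old-binary path is illustrative; the real test downloads it to a temp file). TestStoppedBinaryUpgrade below has the same shape, but inserts a stop between the two starts.

// Rough sketch of the running-binary upgrade flow, not the
// version_upgrade_test.go code. The old-binary path is illustrative.
package main

import (
	"log"
	"os/exec"
)

func start(binary, profile string, extra ...string) error {
	args := append([]string{"start", "-p", profile, "--memory=2200"}, extra...)
	return exec.Command(binary, args...).Run()
}

func main() {
	const profile = "running-upgrade-048000"
	// 1) A released binary creates the cluster and leaves it running.
	if err := start("/tmp/minikube-v1.26.0", profile, "--vm-driver=hyperkit"); err != nil {
		log.Fatal(err)
	}
	// 2) The binary under test must adopt the live cluster in place.
	if err := start("out/minikube-darwin-amd64", profile, "--driver=hyperkit"); err != nil {
		log.Fatal(err)
	}
}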

                                                
                                    
TestKubernetesUpgrade (1368.51s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-398000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-398000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (51.760815179s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-398000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-398000: (8.394049712s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-398000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-398000 status --format={{.Host}}: exit status 7 (68.585578ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-398000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit 
E0926 18:43:14.530543    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:45:29.677227    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:45:36.964963    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:46:52.757593    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:48:14.533887    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:50:29.744034    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:50:37.031561    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-398000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit : (11m4.015692731s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-398000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-398000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-398000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (591.882573ms)
-- stdout --
	* [kubernetes-upgrade-398000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-398000
	    minikube start -p kubernetes-upgrade-398000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3980002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-398000 --kubernetes-version=v1.31.1
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-398000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit 
E0926 18:52:00.112877    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:53:14.600919    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:55:29.746247    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:55:37.032348    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:56:17.696494    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:58:14.602389    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 19:00:29.749484    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
E0926 19:00:37.035420    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/functional-748000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-398000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit : (10m38.376475841s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-398000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-398000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-398000: (5.253642908s)
--- PASS: TestKubernetesUpgrade (1368.51s)
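
Note: the key assertion in this test is the exit code. Upgrading v1.20.0 to v1.31.1 must succeed, while the subsequent downgrade attempt must fail fast with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), as the stderr block above shows. A minimal sketch of that check (helper names are invented; version_upgrade_test.go differs in detail):

// Sketch of the exit-code assertions visible in the log above, not
// minikube's actual test code.
package main

import (
	"fmt"
	"os/exec"
)

// exitCode runs the minikube binary under test and returns its exit status.
func exitCode(args ...string) int {
	err := exec.Command("out/minikube-darwin-amd64", args...).Run()
	if err == nil {
		return 0
	}
	if ee, ok := err.(*exec.ExitError); ok {
		return ee.ExitCode()
	}
	return -1 // the binary could not be run at all
}

func main() {
	const p = "kubernetes-upgrade-398000"
	exitCode("start", "-p", p, "--kubernetes-version=v1.20.0", "--driver=hyperkit")
	exitCode("stop", "-p", p)
	exitCode("start", "-p", p, "--kubernetes-version=v1.31.1", "--driver=hyperkit")
	// Downgrading the existing cluster must be refused, not attempted.
	if got := exitCode("start", "-p", p, "--kubernetes-version=v1.20.0", "--driver=hyperkit"); got != 106 {
		fmt.Printf("expected exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), got %d\n", got)
	}
}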

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.11s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=19711
- KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3278690279/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3278690279/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3278690279/001/.minikube/bin/docker-machine-driver-hyperkit

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3278690279/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.11s)
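
Note: the chown/chmod pair above exists because the hyperkit driver must be a root-owned setuid binary to create VMs without sudo; lacking a password, the test proceeds with the warning shown. A hypothetical check for that state (not part of the test suite; the path is illustrative):

// Hypothetical check: the chown root:wheel + chmod u+s commands above
// leave the driver root-owned and setuid, which is what this inspects.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func rootSetuid(path string) (bool, error) {
	info, err := os.Stat(path)
	if err != nil {
		return false, err
	}
	st, ok := info.Sys().(*syscall.Stat_t)
	if !ok {
		return false, fmt.Errorf("no raw stat for %s", path)
	}
	// Owned by root (uid 0) and the setuid bit is set.
	return st.Uid == 0 && info.Mode()&os.ModeSetuid != 0, nil
}

func main() {
	ok, err := rootSetuid("/usr/local/bin/docker-machine-driver-hyperkit") // illustrative path
	fmt.Println(ok, err)
}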

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.12s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=19711
- KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1554347159/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1554347159/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1554347159/001/.minikube/bin/docker-machine-driver-hyperkit

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1554347159/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.12s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.00s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (165.78s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2016734191 start -p stopped-upgrade-628000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2016734191 start -p stopped-upgrade-628000 --memory=2200 --vm-driver=hyperkit : (41.319134626s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2016734191 -p stopped-upgrade-628000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2016734191 -p stopped-upgrade-628000 stop: (8.251536529s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-628000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0926 19:03:14.605088    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/addons-433000/client.crt: no such file or directory" logger="UnhandledError"
E0926 19:03:32.833703    1679 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1128/.minikube/profiles/skaffold-729000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-628000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m56.206227191s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (165.78s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.75s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-628000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-628000: (2.750057114s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.75s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.4s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-007000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-007000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (403.056454ms)
-- stdout --
	* [NoKubernetes-007000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1128/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1128/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.40s)
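
Note: exit status 14 is minikube's usage-error (MK_USAGE) path: --no-kubernetes and an explicit --kubernetes-version are mutually exclusive, as the stderr above states. A minimal sketch of that validation (names are illustrative, not minikube's actual code):

// Illustrative only; minikube's real validation lives elsewhere and wraps
// this error in the MK_USAGE exit path (exit status 14).
package main

import (
	"errors"
	"fmt"
)

func validateStartFlags(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	fmt.Println(validateStartFlags(true, "1.20")) // mirrors the failing invocation above
}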

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.17s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-007000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-007000 --driver=hyperkit : (40.005587763s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-007000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.17s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.82s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-007000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-007000 --no-kubernetes --driver=hyperkit : (15.274799084s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-007000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-007000 status -o json: exit status 2 (152.77115ms)
-- stdout --
	{"Name":"NoKubernetes-007000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-007000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-007000: (2.389759531s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.82s)
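
Note: the JSON status line above is what the test inspects: the VM host stays Running while kubelet and apiserver report Stopped. A sketch of decoding and checking it (the struct is inferred from the JSON in the log, not taken from minikube's source):

// Struct fields mirror the JSON shown in the log above.
package main

import (
	"encoding/json"
	"fmt"
)

type status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-007000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var s status
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	// VM up, Kubernetes components down: exactly the --no-kubernetes state.
	fmt.Println(s.Host == "Running" && s.Kubelet == "Stopped" && s.APIServer == "Stopped")
}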

                                                
                                    
TestNoKubernetes/serial/Start (18.84s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-007000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-007000 --no-kubernetes --driver=hyperkit : (18.836245326s)
--- PASS: TestNoKubernetes/serial/Start (18.84s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-007000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-007000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (133.395701ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)
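
Note: "systemctl is-active" exits 0 only when the unit is active; the remote status 3 surfaced through ssh as exit status 1 is what proves the kubelet is not running. A hedged sketch of the same probe, assuming the minikube ssh invocation shown above:

// Assumed wrapper around the invocation shown above; minikube ssh
// propagates a failing remote command as its own non-zero exit.
package main

import (
	"fmt"
	"os/exec"
)

func kubeletActive(profile string) bool {
	cmd := exec.Command("out/minikube-darwin-amd64", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	// systemctl is-active exits 0 only when the unit is active, so a nil
	// error here means the kubelet is running.
	return cmd.Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive("NoKubernetes-007000"))
}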

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.58s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.58s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.38s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-007000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-007000: (2.376471002s)
--- PASS: TestNoKubernetes/serial/Stop (2.38s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (19.41s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-007000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-007000 --driver=hyperkit : (19.407805425s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.41s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-007000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-007000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (134.822281ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

Test skip (18/217)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)